Dec 10 14:31:33 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 10 14:31:33 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 10 14:31:33 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 10 14:31:34 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 
14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 
14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 
crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 10 14:31:34 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 10 14:31:35 crc kubenswrapper[4718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 14:31:35 crc kubenswrapper[4718]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 10 14:31:35 crc kubenswrapper[4718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 14:31:35 crc kubenswrapper[4718]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 10 14:31:35 crc kubenswrapper[4718]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 10 14:31:35 crc kubenswrapper[4718]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.436584 4718 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.442928 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.442970 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.442985 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443006 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443018 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443030 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443040 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443051 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443061 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443069 4718 
feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443077 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443085 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443093 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443102 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443109 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443117 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443124 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443132 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443141 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443148 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443158 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443168 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443179 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443188 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443196 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443204 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443213 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443222 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443231 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443239 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443247 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443255 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443262 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443270 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443278 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443286 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443308 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443316 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443324 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443335 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443344 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443352 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443360 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443370 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443425 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443436 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443448 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443457 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443464 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443473 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443481 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443488 4718 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443496 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443503 4718 feature_gate.go:330] unrecognized feature gate: Example
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443512 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443525 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443535 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443546 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443554 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443563 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443571 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443579 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443586 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443593 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443601 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443609 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443616 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443624 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443631 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443639 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.443647 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444214 4718 flags.go:64] FLAG: --address="0.0.0.0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444261 4718 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444305 4718 flags.go:64] FLAG: --anonymous-auth="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444319 4718 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444332 4718 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444342 4718 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444355 4718 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444367 4718 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444377 4718 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444418 4718 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444430 4718 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444441 4718 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444451 4718 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444461 4718 flags.go:64] FLAG: --cgroup-root=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444470 4718 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444480 4718 flags.go:64] FLAG: --client-ca-file=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444490 4718 flags.go:64] FLAG: --cloud-config=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444499 4718 flags.go:64] FLAG: --cloud-provider=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444509 4718 flags.go:64] FLAG: --cluster-dns="[]"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444531 4718 flags.go:64] FLAG: --cluster-domain=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444541 4718 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444551 4718 flags.go:64] FLAG: --config-dir=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444561 4718 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444572 4718 flags.go:64] FLAG: --container-log-max-files="5"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444584 4718 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444596 4718 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444606 4718 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444616 4718 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444626 4718 flags.go:64] FLAG: --contention-profiling="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444636 4718 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444646 4718 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444656 4718 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444665 4718 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444677 4718 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444686 4718 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444696 4718 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444707 4718 flags.go:64] FLAG: --enable-load-reader="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444730 4718 flags.go:64] FLAG: --enable-server="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444741 4718 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444759 4718 flags.go:64] FLAG: --event-burst="100"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444769 4718 flags.go:64] FLAG: --event-qps="50"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444779 4718 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444789 4718 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444798 4718 flags.go:64] FLAG: --eviction-hard=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444810 4718 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444820 4718 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444829 4718 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444839 4718 flags.go:64] FLAG: --eviction-soft=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444849 4718 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444859 4718 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444869 4718 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444879 4718 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444889 4718 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444898 4718 flags.go:64] FLAG: --fail-swap-on="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444908 4718 flags.go:64] FLAG: --feature-gates=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444919 4718 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444929 4718 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444940 4718 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444951 4718 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444961 4718 flags.go:64] FLAG: --healthz-port="10248"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444970 4718 flags.go:64] FLAG: --help="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444980 4718 flags.go:64] FLAG: --hostname-override=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.444990 4718 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445000 4718 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445010 4718 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445019 4718 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445029 4718 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445039 4718 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445049 4718 flags.go:64] FLAG: --image-service-endpoint=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445058 4718 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445067 4718 flags.go:64] FLAG: --kube-api-burst="100"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445077 4718 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445087 4718 flags.go:64] FLAG: --kube-api-qps="50"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445109 4718 flags.go:64] FLAG: --kube-reserved=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445120 4718 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445129 4718 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445139 4718 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445149 4718 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445159 4718 flags.go:64] FLAG: --lock-file=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445168 4718 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445178 4718 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445189 4718 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445204 4718 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445216 4718 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445225 4718 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445235 4718 flags.go:64] FLAG: --logging-format="text"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445244 4718 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445254 4718 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445264 4718 flags.go:64] FLAG: --manifest-url=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445274 4718 flags.go:64] FLAG: --manifest-url-header=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445287 4718 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445296 4718 flags.go:64] FLAG: --max-open-files="1000000"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445308 4718 flags.go:64] FLAG: --max-pods="110"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445318 4718 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445328 4718 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445338 4718 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445347 4718 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445357 4718 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445367 4718 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445377 4718 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445434 4718 flags.go:64] FLAG: --node-status-max-images="50"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445444 4718 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445454 4718 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445463 4718 flags.go:64] FLAG: --pod-cidr=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445473 4718 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445487 4718 flags.go:64] FLAG: --pod-manifest-path=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445497 4718 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445507 4718 flags.go:64] FLAG: --pods-per-core="0"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445517 4718 flags.go:64] FLAG: --port="10250"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445539 4718 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445550 4718 flags.go:64] FLAG: --provider-id=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445559 4718 flags.go:64] FLAG: --qos-reserved=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445569 4718 flags.go:64] FLAG: --read-only-port="10255"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445581 4718 flags.go:64] FLAG: --register-node="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445590 4718 flags.go:64] FLAG: --register-schedulable="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445600 4718 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445624 4718 flags.go:64] FLAG: --registry-burst="10"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445634 4718 flags.go:64] FLAG: --registry-qps="5"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445644 4718 flags.go:64] FLAG: --reserved-cpus=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445654 4718 flags.go:64] FLAG: --reserved-memory=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445666 4718 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445675 4718 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445686 4718 flags.go:64] FLAG: --rotate-certificates="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445696 4718 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445706 4718 flags.go:64] FLAG: --runonce="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445715 4718 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445725 4718 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445735 4718 flags.go:64] FLAG: --seccomp-default="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445745 4718 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445755 4718 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445765 4718 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445776 4718 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445786 4718 flags.go:64] FLAG: --storage-driver-password="root"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445796 4718 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445806 4718 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445815 4718 flags.go:64] FLAG: --storage-driver-user="root"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445825 4718 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445835 4718 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445844 4718 flags.go:64] FLAG: --system-cgroups=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445854 4718 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445869 4718 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445878 4718 flags.go:64] FLAG: --tls-cert-file=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445887 4718 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445907 4718 flags.go:64] FLAG: --tls-min-version=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445916 4718 flags.go:64] FLAG: --tls-private-key-file=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445939 4718 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445950 4718 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445959 4718 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445969 4718 flags.go:64] FLAG: --v="2"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445982 4718 flags.go:64] FLAG: --version="false"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.445995 4718 flags.go:64] FLAG: --vmodule=""
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.446006 4718 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.446017 4718 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464029 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464099 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464114 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464125 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464135 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464143 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464152 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464162 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464173 4718 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464183 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464193 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464202 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464211 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464220 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464229 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464237 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464246 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464254 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464262 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464271 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464280 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464288 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464296 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464306 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464314 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464323 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464331 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464340 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464370 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464380 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464428 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464437 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464450 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464464 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464477 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464487 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464498 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464510 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464523 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464537 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464549 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464560 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464569 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464577 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464587 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464596 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464604 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464613 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464622 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464630 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464638 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464647 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464658 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464670 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464682 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464695 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464706 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464717 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464726 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464735 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464745 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464754 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464763 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464774 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464798 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464807 4718 feature_gate.go:330] unrecognized feature gate: Example
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464816 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464824 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464833 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464841 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.464849 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.464881 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.555634 4718 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.555683 4718 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555766 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555774 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555779 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555784 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555788 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555792 4718 feature_gate.go:330] 
unrecognized feature gate: UpgradeStatus Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555796 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555800 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555804 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555808 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555811 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555815 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555819 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555822 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555826 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555830 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555833 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555837 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555841 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555844 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 10 14:31:35 crc 
kubenswrapper[4718]: W1210 14:31:35.555847 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555851 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555855 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555858 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555862 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555868 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555876 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555881 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555886 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555891 4718 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555896 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555900 4718 feature_gate.go:330] unrecognized feature gate: Example Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555905 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555910 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555913 4718 feature_gate.go:330] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555917 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555921 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555925 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555928 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555933 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555937 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555943 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555977 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555981 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555985 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555990 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555993 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.555997 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556002 4718 feature_gate.go:351] Setting deprecated 
feature gate KMSv1=true. It will be removed in a future release. Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556007 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556011 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556015 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556019 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556024 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556029 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556035 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556040 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556044 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556049 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556054 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556058 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556062 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556066 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556070 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556074 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556078 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556082 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556086 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556089 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556093 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556097 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.556104 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556230 4718 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556237 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556241 4718 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556245 4718 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556249 4718 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556253 4718 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556256 4718 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556271 4718 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556275 4718 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556278 4718 feature_gate.go:330] unrecognized feature gate: Example
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556282 4718 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556285 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556289 4718 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556292 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556295 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556299 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556302 4718 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556306 4718 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556309 4718 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556313 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556316 4718 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556320 4718 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556324 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556327 4718 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556331 4718 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556336 4718 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556341 4718 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556345 4718 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556348 4718 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556352 4718 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556357 4718 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556361 4718 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556366 4718 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556369 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556373 4718 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556377 4718 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556381 4718 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556434 4718 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556441 4718 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556445 4718 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556450 4718 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556454 4718 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556458 4718 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556463 4718 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556467 4718 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556471 4718 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556475 4718 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556490 4718 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556494 4718 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556497 4718 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556501 4718 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556505 4718 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556508 4718 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556514 4718 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556517 4718 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556527 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556531 4718 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556535 4718 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556538 4718 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556542 4718 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556545 4718 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556555 4718 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556558 4718 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556561 4718 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556566 4718 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556569 4718 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556573 4718 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556576 4718 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556580 4718 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556584 4718 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.556588 4718 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.556596 4718 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.556809 4718 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.561098 4718 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.561195 4718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.561752 4718 server.go:997] "Starting client certificate rotation"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.561781 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.561957 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-15 05:48:11.051134838 +0000 UTC
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.562050 4718 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 855h16m35.489088811s for next certificate rotation
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.566370 4718 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.567953 4718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.587372 4718 log.go:25] "Validated CRI v1 runtime API"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.610871 4718 log.go:25] "Validated CRI v1 image API"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.612901 4718 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.615938 4718 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-10-14-26-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.615991 4718 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.636976 4718 manager.go:217] Machine: {Timestamp:2025-12-10 14:31:35.635618492 +0000 UTC m=+0.584841919 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:559eaef7-72a9-45f0-b8d3-0046c76adc0d BootID:fbef6824-3734-4f3d-bf18-030e5c72cff8 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e3:f2:d0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e3:f2:d0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:81:7a:39 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2b:a9:42 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:73:5c:c4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3a:8f:05 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:3a:13:67:8a:e6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ae:c9:81:87:07:bb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.637268 4718 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.637723 4718 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.729157 4718 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.729428 4718 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.729491 4718 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.729831 4718 topology_manager.go:138] "Creating topology manager with none policy"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.729845 4718 container_manager_linux.go:303] "Creating device plugin manager"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.730013 4718 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.730041 4718 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.730464 4718 state_mem.go:36] "Initialized new in-memory state store"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.730573 4718 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.738642 4718 kubelet.go:418] "Attempting to sync node with API server"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.738681 4718 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.738716 4718 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.738737 4718 kubelet.go:324] "Adding apiserver pod source"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.738756 4718 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.744631 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.744834 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.744623 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.745025 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.749784 4718 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.750336 4718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.751230 4718 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752033 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752057 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752065 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752073 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752088 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752099 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752107 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752118 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752128 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752138 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752152 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752162 4718 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.752456 4718 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.753143 4718 server.go:1280] "Started kubelet" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.753538 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.753804 4718 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.753797 4718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.755505 4718 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 10 14:31:35 crc systemd[1]: Started Kubernetes Kubelet. Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.755878 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.756718 4718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.757249 4718 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.757260 4718 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.756760 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:24:52.531684882 +0000 UTC Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.757379 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 
14:31:35.757478 4718 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.758188 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="200ms" Dec 10 14:31:35 crc kubenswrapper[4718]: W1210 14:31:35.758501 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.758609 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.760448 4718 server.go:460] "Adding debug handlers to kubelet server" Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.759677 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fe11d7ef941ab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:31:35.753089451 +0000 UTC m=+0.702312868,LastTimestamp:2025-12-10 14:31:35.753089451 +0000 UTC 
m=+0.702312868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.770135 4718 factory.go:55] Registering systemd factory Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.770956 4718 factory.go:221] Registration of the systemd container factory successfully Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.773663 4718 factory.go:153] Registering CRI-O factory Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.773700 4718 factory.go:221] Registration of the crio container factory successfully Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.773877 4718 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.775125 4718 factory.go:103] Registering Raw factory Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.775152 4718 manager.go:1196] Started watching for new ooms in manager Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.776002 4718 manager.go:319] Starting recovery of all containers Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777220 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777302 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 10 14:31:35 crc 
kubenswrapper[4718]: I1210 14:31:35.777316 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777329 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777373 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777384 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777412 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777447 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777466 4718 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777475 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777486 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777496 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777504 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777518 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777534 4718 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777545 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777557 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777569 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777577 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.777589 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.778954 4718 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual 
state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779002 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779065 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779085 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779101 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779116 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779130 4718 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779162 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779181 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779199 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779218 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779240 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779254 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779304 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779324 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779339 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779354 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779371 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779402 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" 
seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779417 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779436 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779452 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779470 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779489 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779507 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: 
I1210 14:31:35.779525 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779554 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779571 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779593 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779610 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779626 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779642 4718 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779661 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779686 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779712 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.779735 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780243 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780282 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780300 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780327 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780348 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780370 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780415 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780432 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780449 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780466 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780485 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780502 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780520 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780539 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780561 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780577 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780594 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780609 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780627 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780646 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780666 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780687 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780705 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780722 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780739 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780758 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780776 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780793 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780811 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780827 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780854 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780869 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" 
seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780882 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780897 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780912 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780927 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780943 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.780961 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 
14:31:35.780981 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781056 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781073 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781087 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781102 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781118 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781133 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781147 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781161 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781175 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781189 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781213 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781230 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781248 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781261 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781273 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781287 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781302 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781315 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" 
seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781329 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781345 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781360 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781374 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781420 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781437 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 
14:31:35.781452 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781467 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781483 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781500 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781515 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781549 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781561 4718 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781577 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781588 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781600 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781613 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781625 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781637 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781647 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781658 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781671 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781683 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781695 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781706 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781719 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781730 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781742 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781754 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781768 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781780 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781792 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781822 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781834 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.781846 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782631 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782652 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" 
seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782662 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782675 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782686 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782699 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782710 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782720 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782731 4718 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782741 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782784 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782799 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782833 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782847 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.782860 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783119 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783137 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783151 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783424 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783440 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783455 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783468 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783483 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783500 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783513 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783527 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783544 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 
10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783557 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783574 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783587 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783600 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783619 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783632 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783646 
4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783659 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783671 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783683 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783696 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783708 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783721 4718 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783732 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783746 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783759 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783775 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783791 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783805 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783819 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783832 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783846 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783859 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783873 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783887 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 
10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783900 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783912 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783927 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783940 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783954 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783968 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783981 4718 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.783994 4718 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.784006 4718 reconstruct.go:97] "Volume reconstruction finished" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.784023 4718 reconciler.go:26] "Reconciler: start to sync state" Dec 10 14:31:35 crc kubenswrapper[4718]: I1210 14:31:35.809358 4718 manager.go:324] Recovery completed Dec 10 14:31:35 crc kubenswrapper[4718]: E1210 14:31:35.857941 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:35.914334 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.004264 4718 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.005530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.005592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.005606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.006152 4718 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="400ms" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.010169 4718 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.010209 4718 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.010257 4718 state_mem.go:36] "Initialized new in-memory state store" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.017000 4718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.018886 4718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.018985 4718 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.019033 4718 kubelet.go:2335] "Starting kubelet main sync loop" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.019111 4718 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.019834 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.019896 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.020849 4718 policy_none.go:49] "None policy: Start" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.021762 4718 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.021854 4718 state_mem.go:35] "Initializing new in-memory state store" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.080900 4718 manager.go:334] "Starting Device Plugin manager" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.081224 4718 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.081249 4718 server.go:79] "Starting device plugin registration server" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.081864 4718 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.081887 4718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.082149 4718 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.082320 4718 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.082337 4718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.088337 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.119484 4718 
kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.119677 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.121215 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.121264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.121290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.121579 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.121881 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.121958 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.122530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.122591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.122604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.122787 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.122953 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.123008 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.123014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.123198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.123218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124203 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124430 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124567 
4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.124606 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125635 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125660 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125830 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.125861 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126579 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126611 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126636 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.126677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.127312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.127345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.127356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.182166 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.183506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.183551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.183561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.183585 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.184133 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207665 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207722 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207751 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207904 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207940 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.207987 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc 
kubenswrapper[4718]: I1210 14:31:36.208106 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208189 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208245 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208303 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208322 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208341 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.208426 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.309855 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.309932 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.309957 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.309975 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310000 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310023 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310042 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310060 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc 
kubenswrapper[4718]: I1210 14:31:36.310080 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310099 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310122 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310143 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310182 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310615 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310671 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 
14:31:36.310715 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310738 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310755 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310779 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310768 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310781 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310799 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.310877 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc 
kubenswrapper[4718]: I1210 14:31:36.384571 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.386055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.386105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.386116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.386137 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.386657 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.407761 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="800ms" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.445689 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.453115 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.470576 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.486961 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.496288 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.541096 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2e7b36573e44fdb578591b57a52da63529b53b95e3484a6c0acfb6d6ae1497f4 WatchSource:0}: Error finding container 2e7b36573e44fdb578591b57a52da63529b53b95e3484a6c0acfb6d6ae1497f4: Status 404 returned error can't find the container with id 2e7b36573e44fdb578591b57a52da63529b53b95e3484a6c0acfb6d6ae1497f4 Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.542929 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-559758feb282b639f7733a6c20d792463a8e7e833ce744f7662cbdc5f40f45f5 WatchSource:0}: Error finding container 559758feb282b639f7733a6c20d792463a8e7e833ce744f7662cbdc5f40f45f5: Status 404 returned error can't find the container with id 559758feb282b639f7733a6c20d792463a8e7e833ce744f7662cbdc5f40f45f5 Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.544777 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-83ac28200cbaf06d8383fcf5e3c6b11802fd427dbc93a5ae43f9cf60d6301472 WatchSource:0}: Error finding container 83ac28200cbaf06d8383fcf5e3c6b11802fd427dbc93a5ae43f9cf60d6301472: Status 404 returned error can't find 
the container with id 83ac28200cbaf06d8383fcf5e3c6b11802fd427dbc93a5ae43f9cf60d6301472 Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.546650 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c8703ed538294bfd5f0aa85c9f3e5293eb44b9915dec3ad13ffde7160e7887e8 WatchSource:0}: Error finding container c8703ed538294bfd5f0aa85c9f3e5293eb44b9915dec3ad13ffde7160e7887e8: Status 404 returned error can't find the container with id c8703ed538294bfd5f0aa85c9f3e5293eb44b9915dec3ad13ffde7160e7887e8 Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.548257 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-99346fedfba771a0ddf57744050e4852c6a9497896a580653a6863a1099284ec WatchSource:0}: Error finding container 99346fedfba771a0ddf57744050e4852c6a9497896a580653a6863a1099284ec: Status 404 returned error can't find the container with id 99346fedfba771a0ddf57744050e4852c6a9497896a580653a6863a1099284ec Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.754905 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.758034 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:20:40.564179426 +0000 UTC Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.787573 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.789039 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.789082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.789098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:36 crc kubenswrapper[4718]: I1210 14:31:36.789129 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.789685 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.841457 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.841582 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.986467 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.986570 4718 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:36 crc kubenswrapper[4718]: W1210 14:31:36.992324 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:36 crc kubenswrapper[4718]: E1210 14:31:36.992375 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.026633 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"83ac28200cbaf06d8383fcf5e3c6b11802fd427dbc93a5ae43f9cf60d6301472"} Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.027861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"99346fedfba771a0ddf57744050e4852c6a9497896a580653a6863a1099284ec"} Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.029361 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e7b36573e44fdb578591b57a52da63529b53b95e3484a6c0acfb6d6ae1497f4"} Dec 10 14:31:37 crc 
kubenswrapper[4718]: I1210 14:31:37.031129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"559758feb282b639f7733a6c20d792463a8e7e833ce744f7662cbdc5f40f45f5"} Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.032400 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8703ed538294bfd5f0aa85c9f3e5293eb44b9915dec3ad13ffde7160e7887e8"} Dec 10 14:31:37 crc kubenswrapper[4718]: E1210 14:31:37.209232 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="1.6s" Dec 10 14:31:37 crc kubenswrapper[4718]: W1210 14:31:37.254374 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:37 crc kubenswrapper[4718]: E1210 14:31:37.254555 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.590673 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.592007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.592058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.592071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.592096 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:37 crc kubenswrapper[4718]: E1210 14:31:37.592627 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.754756 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:37 crc kubenswrapper[4718]: I1210 14:31:37.758960 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:16:12.378336508 +0000 UTC Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.037006 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d"} Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.037058 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42"} Dec 10 
14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.038996 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05" exitCode=0 Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.039096 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05"} Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.039190 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.040351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.040401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.040414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.041360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7"} Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.041375 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.041335 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7" exitCode=0 Dec 10 14:31:38 
crc kubenswrapper[4718]: I1210 14:31:38.042125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.042148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.042160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.044314 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.044443 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="86141b1111aec255b1850d00a497e541509f05d03aba5ce3e1c26bf8d7181fdc" exitCode=0 Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.044563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"86141b1111aec255b1850d00a497e541509f05d03aba5ce3e1c26bf8d7181fdc"} Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.044669 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.045225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.045247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.045263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.047292 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.047352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.047371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.048603 4718 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3" exitCode=0 Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.048642 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3"} Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.048774 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.050896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.050933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.050944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:38 crc kubenswrapper[4718]: E1210 14:31:38.512052 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187fe11d7ef941ab default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:31:35.753089451 +0000 UTC m=+0.702312868,LastTimestamp:2025-12-10 14:31:35.753089451 +0000 UTC m=+0.702312868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.755432 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:38 crc kubenswrapper[4718]: I1210 14:31:38.759457 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:13:10.216655747 +0000 UTC Dec 10 14:31:38 crc kubenswrapper[4718]: E1210 14:31:38.843049 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="3.2s" Dec 10 14:31:38 crc kubenswrapper[4718]: W1210 14:31:38.919416 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:38 crc kubenswrapper[4718]: E1210 14:31:38.919547 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:39 crc kubenswrapper[4718]: W1210 14:31:39.063002 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:39 crc kubenswrapper[4718]: E1210 14:31:39.063082 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.200198 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.200425 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.200566 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.224985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.225030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.225039 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.225452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.225512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.225525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.225555 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:39 crc kubenswrapper[4718]: E1210 14:31:39.226368 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.327074 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.327150 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.330508 4718 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f71357acb00a7b2364de56823e8f662069885964da1fc1aa8a3e91947869f334" exitCode=0 Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.330585 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f71357acb00a7b2364de56823e8f662069885964da1fc1aa8a3e91947869f334"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.330747 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.331850 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.331876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.331889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:39 crc kubenswrapper[4718]: W1210 14:31:39.333578 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:39 crc kubenswrapper[4718]: E1210 14:31:39.333652 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.335813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 
14:31:39.335844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.339322 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.339354 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1"} Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.339515 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.340236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.340256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.340267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:39 crc kubenswrapper[4718]: W1210 14:31:39.374381 4718 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:39 crc kubenswrapper[4718]: E1210 
14:31:39.374507 4718 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.754686 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.759787 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:44:16.234631699 +0000 UTC Dec 10 14:31:39 crc kubenswrapper[4718]: I1210 14:31:39.759848 4718 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 74h12m36.474787171s for next certificate rotation Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.295278 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.425604 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a"} Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.425668 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.427612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:40 crc kubenswrapper[4718]: 
I1210 14:31:40.427648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.427657 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.431733 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee"} Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.431770 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"96a402f0272405a079b30df6cd969089ca77cda5c69af9a3b37067724e7a2203"} Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.431785 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a"} Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.431879 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.433027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.433048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.433058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.436712 4718 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="569b47e5967ce0ac32c9e788a893311624b65ce53898f938b656290e3521eac1" exitCode=0 Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.436903 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.436934 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"569b47e5967ce0ac32c9e788a893311624b65ce53898f938b656290e3521eac1"} Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.436976 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.438741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.438820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.438834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.459568 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.461963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.462197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.462216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.464726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.464781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.464796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.594781 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.628588 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:40 crc kubenswrapper[4718]: I1210 14:31:40.754591 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443468 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443518 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443480 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443555 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d9bd6f4d07250b40c63c763479f4fda1c23d3c9f71eedb262fd9cb238354605"} Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443660 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"92692024d66d9335da5886842043cbe9f547d164de3fdac6376b164ec8768113"} Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443682 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2fd301838aa5021be88873ee897a124950b0e5de604202ef2e9adc1d5ae40732"} Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443695 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.443792 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445418 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445530 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.445955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:41 crc kubenswrapper[4718]: I1210 14:31:41.754472 4718 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.287858 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.427079 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.428506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.428575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.428586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.428612 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 
14:31:42.447654 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.449119 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="96a402f0272405a079b30df6cd969089ca77cda5c69af9a3b37067724e7a2203" exitCode=255 Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.449192 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"96a402f0272405a079b30df6cd969089ca77cda5c69af9a3b37067724e7a2203"} Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.449371 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.450064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.450099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.450108 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.450595 4718 scope.go:117] "RemoveContainer" containerID="96a402f0272405a079b30df6cd969089ca77cda5c69af9a3b37067724e7a2203" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.452381 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8437bd6fcc9fe1422502b2feedbb03714ddbb2620d3b6431118c34511a950de6"} Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.452435 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"199df3bd0e866ea57708c66a7cc92e346afc57c8e49449d4cee70825f658684a"} Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.452479 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.452489 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.475186 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.475265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.475280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.475290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.475353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.475368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:42 crc kubenswrapper[4718]: I1210 14:31:42.626472 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.364848 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:43 crc 
kubenswrapper[4718]: I1210 14:31:43.457762 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.460010 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8"} Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.460110 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.460145 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.460187 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.460240 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.461431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.461473 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.461484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.461579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.461626 4718 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.461638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.462000 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.462040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:43 crc kubenswrapper[4718]: I1210 14:31:43.462054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.462371 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.462455 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.463259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.463294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.463302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.679774 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.679995 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.681323 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.681354 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.681362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:44 crc kubenswrapper[4718]: I1210 14:31:44.771223 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.288817 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.288917 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.366426 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.366845 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.368718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.368779 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.368801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.465455 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.465519 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.466755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.466799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.466812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:45 crc kubenswrapper[4718]: I1210 14:31:45.473939 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:46 crc kubenswrapper[4718]: E1210 14:31:46.088468 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.268550 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.268837 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.270117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:46 crc 
kubenswrapper[4718]: I1210 14:31:46.270161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.270173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.468639 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.469765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.469802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:46 crc kubenswrapper[4718]: I1210 14:31:46.469812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.366433 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.366686 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.368370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.368421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.368440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.371703 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.471256 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.472106 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.472154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:47 crc kubenswrapper[4718]: I1210 14:31:47.472168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:52 crc kubenswrapper[4718]: E1210 14:31:52.049528 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 10 14:31:52 crc kubenswrapper[4718]: E1210 14:31:52.430299 4718 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 10 14:31:52 crc kubenswrapper[4718]: I1210 14:31:52.490868 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 14:31:52 crc kubenswrapper[4718]: I1210 14:31:52.490968 4718 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 10 14:31:52 crc kubenswrapper[4718]: I1210 14:31:52.495763 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 10 14:31:52 crc kubenswrapper[4718]: I1210 14:31:52.495861 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 10 14:31:53 crc kubenswrapper[4718]: I1210 14:31:53.365710 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 14:31:53 crc kubenswrapper[4718]: I1210 14:31:53.365791 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.776158 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.776775 4718 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.777317 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.777461 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.778438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.778482 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.778497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.779626 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.779814 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.780740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.780773 4718 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.780785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.781304 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:31:54 crc kubenswrapper[4718]: I1210 14:31:54.791953 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.288715 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.288821 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.475665 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.475748 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.492972 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.493713 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.493852 4718 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.493916 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.494101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.494180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.494198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.495016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.495051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:55 crc kubenswrapper[4718]: I1210 14:31:55.495063 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:56 crc kubenswrapper[4718]: E1210 14:31:56.088810 4718 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.494115 4718 trace.go:236] Trace[1111341402]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:43.375) (total time: 14118ms): Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[1111341402]: ---"Objects listed" error: 14117ms (14:31:57.493) Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[1111341402]: [14.118005439s] [14.118005439s] END Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.494174 4718 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.494340 4718 trace.go:236] Trace[246670786]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:43.821) (total time: 13673ms): Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[246670786]: ---"Objects listed" error: 13673ms (14:31:57.494) Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[246670786]: [13.673292161s] [13.673292161s] END Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.494357 4718 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.501829 4718 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.501911 4718 trace.go:236] Trace[500752586]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:44.034) (total time: 13466ms): Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[500752586]: ---"Objects listed" error: 13466ms (14:31:57.501) Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[500752586]: [13.466998752s] [13.466998752s] END Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.501951 4718 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.502730 4718 trace.go:236] Trace[748051848]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Dec-2025 14:31:44.130) (total time: 13371ms): Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[748051848]: ---"Objects listed" error: 13371ms (14:31:57.502) Dec 10 14:31:57 crc kubenswrapper[4718]: Trace[748051848]: [13.371930538s] [13.371930538s] END Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.502773 4718 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.754001 4718 apiserver.go:52] "Watching apiserver" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.759702 4718 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.760023 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.760438 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.760516 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.760523 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:31:57 crc kubenswrapper[4718]: E1210 14:31:57.760624 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:31:57 crc kubenswrapper[4718]: E1210 14:31:57.760701 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.761134 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.762722 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:31:57 crc kubenswrapper[4718]: E1210 14:31:57.762786 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.762816 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.769368 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.769533 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.769704 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.769365 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.769761 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.769895 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.770094 4718 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.770445 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.770514 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.798001 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.809603 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.820262 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.832483 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.845756 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.858128 4718 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.858567 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.869701 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904173 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904273 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904299 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904349 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904376 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904641 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" 
(OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904935 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.904970 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905004 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905028 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905032 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905051 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905135 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905170 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905228 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905255 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905289 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905357 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905407 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905434 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905459 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905484 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905508 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905551 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905570 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905598 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905753 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905778 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905782 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905825 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905843 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905941 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905943 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.905984 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906078 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906114 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906157 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906162 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906268 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906300 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906295 
4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906325 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906349 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906345 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906371 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906411 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906435 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906453 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906459 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906514 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906546 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906580 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906595 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906607 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906644 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906692 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906717 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906744 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906773 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906797 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906819 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906854 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906883 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906836 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906916 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906943 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.906993 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907018 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907049 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907073 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907096 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907118 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907166 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907188 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" 
(OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907272 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907431 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907825 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907810 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907830 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907869 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907906 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907950 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907978 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.908006 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.907999 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.908434 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.908730 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.908808 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.908834 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.909069 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.909344 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.909721 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910078 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910198 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910262 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910259 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910337 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910420 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910486 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910688 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910730 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910798 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.910863 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: E1210 14:31:57.911373 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:31:58.411336982 +0000 UTC m=+23.360560579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911465 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911518 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911546 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911614 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911637 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911891 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.911694 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912279 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912346 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912669 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912766 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912915 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912948 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.912961 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913077 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913081 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913115 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913181 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913189 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913209 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913524 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913560 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913681 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913744 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913768 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913807 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913833 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913924 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913949 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.913964 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914001 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914031 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914115 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914137 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914249 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914007 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914306 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914180 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914233 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914353 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914411 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914439 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914467 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914580 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914609 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914662 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914692 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914760 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914784 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914790 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914829 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914855 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914864 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914893 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.914909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915105 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915162 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915198 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915232 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915271 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915335 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915366 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915410 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915478 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915512 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915543 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915572 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915599 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915627 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915655 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915689 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915719 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915748 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915775 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915798 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916118 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916162 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916187 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916210 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916237 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916283 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916306 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916329 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916351 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916382 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916423 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916448 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917066 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917105 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917135 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917160 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917230 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917255 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917282 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917332 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917355 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917374 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917445 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917462 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917479 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917535 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917566 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917588 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917609 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917632 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917653 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917676 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917698 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917720 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917740 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917761 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917804 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917859 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917901 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917922 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917946 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917992 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918021 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918044 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918068 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918092 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918116 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918139 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918163 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918236 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918257 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918284 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918310 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918454 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918541 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918568 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918594 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918632 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918770 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID:
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918822 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918952 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918972 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915161 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915275 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915365 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915376 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915350 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915725 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915801 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.915911 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916521 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916818 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.916966 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917061 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917227 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917254 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.917909 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918074 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918367 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918561 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918810 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.918928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.919526 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.919815 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.920411 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.920446 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.920883 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.920903 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921002 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921097 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921462 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921491 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921524 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921839 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.921847 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.922051 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.922716 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.922996 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.923149 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.923284 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.923715 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.923976 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.924068 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.924444 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.924895 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.925070 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.925539 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.925825 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.926152 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.926593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.927026 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.927944 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.928353 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.928649 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.929174 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.929624 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.929914 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.930163 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.930473 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.930911 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.931464 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.932315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.934329 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.934689 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.934883 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.935120 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.935338 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.935592 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.935916 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.936151 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.936435 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.937174 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.949504 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.950278 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:57 crc kubenswrapper[4718]: I1210 14:31:57.951189 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:57.931864 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076122 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076623 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076696 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076833 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076843 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.076878 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.077077 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.077205 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.077233 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.077257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.088029 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.089779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.091022 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.091793 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.092294 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.093594 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.093921 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.094223 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.094547 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.094646 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.094701 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.095983 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.096377 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.102481 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.097858 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098327 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098349 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.102644 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:58.602616688 +0000 UTC m=+23.551840105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098358 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098446 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098454 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098697 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.099580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.099679 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.100006 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.100350 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.100645 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.100966 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.101169 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.101379 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.101590 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.101907 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.100210 4718 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.103068 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.103872 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.104487 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.098480 4718 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.104670 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.100733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.104954 4718 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.104952 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.105007 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.105038 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.105092 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.105333 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.105371 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.105458 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:58.605380378 +0000 UTC m=+23.554603795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.105794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.106107 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.106255 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.106787 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.106871 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.107149 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.107193 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.107236 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.107595 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.108077 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.108451 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.108736 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.109228 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.109439 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.109785 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.109879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.110426 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.110508 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.110712 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.110802 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.110971 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.110996 4718 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111011 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 
crc kubenswrapper[4718]: I1210 14:31:58.111030 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111043 4718 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111056 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111073 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111090 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111103 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111117 4718 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111132 4718 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111144 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111156 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111171 4718 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111186 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111198 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111210 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111225 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111237 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111250 4718 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111262 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111276 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111289 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111304 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111317 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111334 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111347 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111359 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111370 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111407 4718 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111419 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111431 4718 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111448 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111459 4718 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111470 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111482 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111496 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111507 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111518 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 
crc kubenswrapper[4718]: I1210 14:31:58.111530 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111544 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111560 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111572 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111585 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111601 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.111613 4718 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 
crc kubenswrapper[4718]: I1210 14:31:58.111624 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.114664 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.116100 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.118050 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.121005 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.121405 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.121448 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.121494 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.121967 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:58.621576301 +0000 UTC m=+23.570799718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.127714 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.128723 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.132416 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.133037 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.134081 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.134973 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.136220 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.136277 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.136297 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.136416 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:58.636368808 +0000 UTC m=+23.585592405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.139558 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.140751 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.142719 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.143535 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.145651 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.146472 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.146946 4718 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.147733 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.148335 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.149027 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.149709 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.151212 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.151906 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.153231 4718 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.154250 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.156069 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.156825 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.157206 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.158192 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.158857 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.159470 4718 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.159619 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.162315 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.162930 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.164153 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.165921 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.166644 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.167682 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.168482 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.169574 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.170156 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.170867 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.171955 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.172921 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.173542 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.174613 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.175166 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.176332 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.176931 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.177909 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.178368 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.178951 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.179943 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.180480 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.205329 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.205502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212335 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212414 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212483 4718 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212497 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212509 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212520 4718 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212530 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212542 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212553 4718 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212563 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212574 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212587 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212597 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212607 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212617 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212628 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" 
DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212638 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212649 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212660 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212670 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212680 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212691 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212704 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212714 4718 
reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212724 4718 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212734 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212744 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212754 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212764 4718 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212776 4718 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212786 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212797 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212808 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212819 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212829 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212839 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212849 4718 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212859 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 10 
14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212871 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212881 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212893 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212904 4718 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212914 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212925 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212935 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212946 4718 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212956 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212967 4718 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212978 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.212990 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213002 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213011 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213042 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213053 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213064 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213076 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213088 4718 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213099 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213110 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213121 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 
14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213133 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213144 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213155 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213166 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213177 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213189 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213200 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213211 4718 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213223 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213234 4718 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213245 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213257 4718 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213267 4718 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213278 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213288 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 
crc kubenswrapper[4718]: I1210 14:31:58.213303 4718 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213314 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213326 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213337 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213350 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213361 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213373 4718 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213384 4718 reconciler_common.go:293] "Volume 
detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213410 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213422 4718 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213432 4718 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213442 4718 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213453 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213465 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213476 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213487 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213499 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213511 4718 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213524 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213534 4718 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213545 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213557 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213570 4718 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213581 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213592 4718 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213603 4718 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213614 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213626 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213638 4718 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213649 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213660 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213671 4718 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213693 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213704 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213715 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213726 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213736 4718 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213747 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213758 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213769 4718 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213780 4718 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213790 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213800 4718 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213812 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213824 4718 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213835 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213846 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213857 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213867 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213878 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213892 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213902 4718 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213914 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213924 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213935 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213945 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213956 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213967 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213977 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213988 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.213999 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.214009 4718 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.214175 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.214353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.382014 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.390538 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.397766 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 10 14:31:58 crc kubenswrapper[4718]: W1210 14:31:58.405862 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-fb8720b7b33e81af976c652313c48c99797cb7cd1fdf0f651028cf2f4559bcb6 WatchSource:0}: Error finding container fb8720b7b33e81af976c652313c48c99797cb7cd1fdf0f651028cf2f4559bcb6: Status 404 returned error can't find the container with id fb8720b7b33e81af976c652313c48c99797cb7cd1fdf0f651028cf2f4559bcb6 Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.415142 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.415318 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:31:59.415291259 +0000 UTC m=+24.364514676 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:31:58 crc kubenswrapper[4718]: W1210 14:31:58.416430 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7ff504b1d4612dbac10e1df7d053aeb5ed4c7d6cb7a44ae1d1314c5e04366bb0 WatchSource:0}: Error finding container 7ff504b1d4612dbac10e1df7d053aeb5ed4c7d6cb7a44ae1d1314c5e04366bb0: Status 404 returned error can't find the container with id 7ff504b1d4612dbac10e1df7d053aeb5ed4c7d6cb7a44ae1d1314c5e04366bb0 Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.502357 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ff504b1d4612dbac10e1df7d053aeb5ed4c7d6cb7a44ae1d1314c5e04366bb0"} Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.503689 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fb8720b7b33e81af976c652313c48c99797cb7cd1fdf0f651028cf2f4559bcb6"} Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.504504 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"57d47c71b5f19fa88d2c864850ec1dc82b0f3b75c8ec0361ed021aa03fa71c5e"} Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.505920 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.506331 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.507723 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" exitCode=255 Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.507750 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8"} Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.507806 4718 scope.go:117] "RemoveContainer" containerID="96a402f0272405a079b30df6cd969089ca77cda5c69af9a3b37067724e7a2203" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.527058 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.539227 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.550836 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.560036 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.562039 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.562411 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.562734 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.569215 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.584375 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.625528 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.625579 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.625608 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.625696 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.625747 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.625779 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:59.625757474 +0000 UTC m=+24.574980891 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.625956 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.626035 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.626053 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.625994 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:59.625913188 +0000 UTC m=+24.575136615 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.626150 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:59.626136464 +0000 UTC m=+24.575360051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.726223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.726475 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.726515 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.726527 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.726624 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:31:59.726595145 +0000 UTC m=+24.675818732 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.831224 4718 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.833348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.833395 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.833404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 
14:31:58.833479 4718 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.864138 4718 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.864329 4718 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.866733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.866785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.866794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.866815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.866828 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:58Z","lastTransitionTime":"2025-12-10T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.882747 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.887513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.887656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.887744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.887819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.887879 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:58Z","lastTransitionTime":"2025-12-10T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.905285 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.909988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.910019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.910031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.910057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.910070 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:58Z","lastTransitionTime":"2025-12-10T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.924311 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.929702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.929757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.929778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.929797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.929809 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:58Z","lastTransitionTime":"2025-12-10T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.943834 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.948687 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.948711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.948725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.948740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.948750 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:58Z","lastTransitionTime":"2025-12-10T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.961802 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:58 crc kubenswrapper[4718]: E1210 14:31:58.962072 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.970486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.970627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.970640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.970659 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:58 crc kubenswrapper[4718]: I1210 14:31:58.970677 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:58Z","lastTransitionTime":"2025-12-10T14:31:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.020220 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.020506 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.073641 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.073917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.074031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.074110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.074184 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.176960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.176982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.176990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.177004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.177013 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.280260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.280315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.280329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.280349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.280363 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.387224 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.387274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.387285 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.387302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.387313 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.482702 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.482857 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:32:01.482835744 +0000 UTC m=+26.432059161 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.489768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.489803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.489816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.489831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.489843 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.513671 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.513723 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.523174 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.525274 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.528046 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.528378 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.592612 
4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.592990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.593059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.593152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.593239 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.605601 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.641372 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.673049 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96a402f0272405a079b30df6cd969089ca77cda5c69af9a3b37067724e7a2203\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:41Z\\\",\\\"message\\\":\\\"W1210 14:31:41.007972 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1210 14:31:41.008556 1 crypto.go:601] Generating new CA for check-endpoints-signer@1765377101 cert, and key in /tmp/serving-cert-4273492018/serving-signer.crt, /tmp/serving-cert-4273492018/serving-signer.key\\\\nI1210 14:31:41.307377 1 observer_polling.go:159] Starting file observer\\\\nW1210 14:31:41.310916 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1210 14:31:41.311216 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:41.312591 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4273492018/tls.crt::/tmp/serving-cert-4273492018/tls.key\\\\\\\"\\\\nF1210 14:31:41.786142 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection 
refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 
14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.684939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685169 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.685337 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685480 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:01.68544926 +0000 UTC m=+26.634672847 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.685533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685690 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685711 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685722 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685786 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:01.685764548 +0000 UTC m=+26.634987965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685690 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.685816 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:01.685810449 +0000 UTC m=+26.635033866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.690267 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.695668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.695757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.695770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.695795 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.695809 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.716018 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.736832 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.760115 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.772709 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.786354 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.786674 4718 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.786725 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.786741 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:59 crc kubenswrapper[4718]: E1210 14:31:59.786829 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:01.786801973 +0000 UTC m=+26.736025530 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.798873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.798917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.798930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.798950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.798964 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.806256 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.819216 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.829996 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.844496 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.965012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.965050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.965059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.965074 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:31:59 crc kubenswrapper[4718]: I1210 14:31:59.965085 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:31:59Z","lastTransitionTime":"2025-12-10T14:31:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.115835 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:31:59Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.116223 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:00 crc kubenswrapper[4718]: E1210 14:32:00.116350 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.117730 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:00 crc kubenswrapper[4718]: E1210 14:32:00.117933 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.119296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.119318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.119326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.119340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.119370 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.119905 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.120961 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.123619 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.222458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.222504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.222518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.222537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.222550 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.229687 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.328057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.328110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.328122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.328139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.328151 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.431642 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.431676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.431683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.431698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.431707 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.482835 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hv62w"] Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.483193 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.484237 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xvkg4"] Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.484624 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.487043 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.487572 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.487815 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8zmhn"] Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.488131 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.489044 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.489071 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.489045 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.496717 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.496758 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.496884 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.497030 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.497199 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.506444 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.506838 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.507306 4718 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.522733 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.534751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.534819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.534831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.534849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.534862 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.542093 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.557152 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.575474 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.591823 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c
bf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.607454 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.620869 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfb2t\" (UniqueName: \"kubernetes.io/projected/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-kube-api-access-kfb2t\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.620950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-cni-multus\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.620979 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-netns\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621004 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-mcd-auth-proxy-config\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621052 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-conf-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621074 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-system-cni-dir\") pod \"multus-hv62w\" (UID: 
\"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-kubelet\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621117 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-multus-certs\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621212 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-proxy-tls\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmbx\" (UniqueName: \"kubernetes.io/projected/af814c60-50de-499d-a1b2-b18f3749bc35-kube-api-access-vsmbx\") pod \"node-resolver-xvkg4\" (UID: \"af814c60-50de-499d-a1b2-b18f3749bc35\") " pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84p4\" (UniqueName: \"kubernetes.io/projected/9db3984f-4589-462f-94d7-89a885be63d5-kube-api-access-d84p4\") pod 
\"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621295 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-rootfs\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621311 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-cnibin\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621329 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-os-release\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621421 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/af814c60-50de-499d-a1b2-b18f3749bc35-hosts-file\") pod \"node-resolver-xvkg4\" (UID: \"af814c60-50de-499d-a1b2-b18f3749bc35\") " pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621503 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-cni-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " 
pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9db3984f-4589-462f-94d7-89a885be63d5-cni-binary-copy\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621542 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-socket-dir-parent\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621579 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9db3984f-4589-462f-94d7-89a885be63d5-multus-daemon-config\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621603 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-cni-bin\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-hostroot\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 
14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621700 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-etc-kubernetes\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.621772 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-k8s-cni-cncf-io\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.623161 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.637381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.637445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.637457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.637475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.637500 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.639887 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.655327 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.671572 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c
bf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.689232 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.707882 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.720440 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-netns\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-mcd-auth-proxy-config\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723131 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-conf-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-system-cni-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723179 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-kubelet\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723195 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-multus-certs\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723212 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84p4\" (UniqueName: \"kubernetes.io/projected/9db3984f-4589-462f-94d7-89a885be63d5-kube-api-access-d84p4\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-rootfs\") pod 
\"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723247 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-proxy-tls\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmbx\" (UniqueName: \"kubernetes.io/projected/af814c60-50de-499d-a1b2-b18f3749bc35-kube-api-access-vsmbx\") pod \"node-resolver-xvkg4\" (UID: \"af814c60-50de-499d-a1b2-b18f3749bc35\") " pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723291 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-cnibin\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723309 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-os-release\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723315 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-multus-certs\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " 
pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723332 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/af814c60-50de-499d-a1b2-b18f3749bc35-hosts-file\") pod \"node-resolver-xvkg4\" (UID: \"af814c60-50de-499d-a1b2-b18f3749bc35\") " pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723371 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/af814c60-50de-499d-a1b2-b18f3749bc35-hosts-file\") pod \"node-resolver-xvkg4\" (UID: \"af814c60-50de-499d-a1b2-b18f3749bc35\") " pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9db3984f-4589-462f-94d7-89a885be63d5-multus-daemon-config\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-cni-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723454 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-system-cni-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723475 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9db3984f-4589-462f-94d7-89a885be63d5-cni-binary-copy\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-socket-dir-parent\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723601 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-rootfs\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723611 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-k8s-cni-cncf-io\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723651 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-cni-bin\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723679 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-hostroot\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-etc-kubernetes\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723753 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-cni-multus\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723777 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfb2t\" (UniqueName: \"kubernetes.io/projected/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-kube-api-access-kfb2t\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723810 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-kubelet\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-cnibin\") pod \"multus-hv62w\" (UID: 
\"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723911 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-socket-dir-parent\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723972 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-cni-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723993 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-os-release\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.723998 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-hostroot\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724027 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-cni-bin\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-k8s-cni-cncf-io\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724054 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-var-lib-cni-multus\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724065 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-etc-kubernetes\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-host-run-netns\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9db3984f-4589-462f-94d7-89a885be63d5-multus-conf-dir\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724297 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9db3984f-4589-462f-94d7-89a885be63d5-cni-binary-copy\") pod \"multus-hv62w\" (UID: 
\"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724384 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-mcd-auth-proxy-config\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.724779 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9db3984f-4589-462f-94d7-89a885be63d5-multus-daemon-config\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.728833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-proxy-tls\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.739553 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.740689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.740729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.740739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.740754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.740764 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.749770 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfb2t\" (UniqueName: \"kubernetes.io/projected/8db53917-7cfb-496d-b8a0-5cc68f3be4e7-kube-api-access-kfb2t\") pod \"machine-config-daemon-8zmhn\" (UID: \"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\") " pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.751577 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsmbx\" (UniqueName: \"kubernetes.io/projected/af814c60-50de-499d-a1b2-b18f3749bc35-kube-api-access-vsmbx\") pod \"node-resolver-xvkg4\" (UID: \"af814c60-50de-499d-a1b2-b18f3749bc35\") " pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.754218 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84p4\" (UniqueName: \"kubernetes.io/projected/9db3984f-4589-462f-94d7-89a885be63d5-kube-api-access-d84p4\") pod \"multus-hv62w\" (UID: \"9db3984f-4589-462f-94d7-89a885be63d5\") " pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.766987 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.781458 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.792138 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.803133 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:00Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.805312 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hv62w" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.815566 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xvkg4" Dec 10 14:32:00 crc kubenswrapper[4718]: W1210 14:32:00.818067 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9db3984f_4589_462f_94d7_89a885be63d5.slice/crio-7f89974d7245e64c8457b51972d5a63deb98154961a1a5d4273c32bb1fa3a6eb WatchSource:0}: Error finding container 7f89974d7245e64c8457b51972d5a63deb98154961a1a5d4273c32bb1fa3a6eb: Status 404 returned error can't find the container with id 7f89974d7245e64c8457b51972d5a63deb98154961a1a5d4273c32bb1fa3a6eb Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.822645 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:32:00 crc kubenswrapper[4718]: W1210 14:32:00.832303 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf814c60_50de_499d_a1b2_b18f3749bc35.slice/crio-7c21005ed6f22ff90aeb350baa8ade74081c04504b21129b835eaa29986bcd15 WatchSource:0}: Error finding container 7c21005ed6f22ff90aeb350baa8ade74081c04504b21129b835eaa29986bcd15: Status 404 returned error can't find the container with id 7c21005ed6f22ff90aeb350baa8ade74081c04504b21129b835eaa29986bcd15 Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.843422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.843704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.843806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.843902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.843984 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.890019 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kkfdg"] Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.890694 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.894369 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.894647 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.949113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.949145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.949153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.949166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:00 crc kubenswrapper[4718]: I1210 14:32:00.949176 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:00Z","lastTransitionTime":"2025-12-10T14:32:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.019602 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.020058 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db9c6842-03cb-4a28-baff-77f27f537aa4-cni-binary-copy\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041085 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-system-cni-dir\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041126 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-cnibin\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " 
pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041161 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxrt7\" (UniqueName: \"kubernetes.io/projected/db9c6842-03cb-4a28-baff-77f27f537aa4-kube-api-access-bxrt7\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041191 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-os-release\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.041236 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db9c6842-03cb-4a28-baff-77f27f537aa4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.045833 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtch2"] Dec 10 14:32:01 crc 
kubenswrapper[4718]: I1210 14:32:01.046753 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 14:32:01.051787 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 14:32:01.051831 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.051856 4718 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.051880 4718 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 
14:32:01.052011 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.052031 4718 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 14:32:01.052063 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.052074 4718 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 14:32:01.052135 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" 
in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.052146 4718 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 14:32:01.052172 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.052181 4718 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: W1210 14:32:01.052219 4718 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.052232 4718 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.058566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.058634 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.058646 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.058665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.058679 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.120867 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.142829 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-os-release\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.142896 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db9c6842-03cb-4a28-baff-77f27f537aa4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.142939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db9c6842-03cb-4a28-baff-77f27f537aa4-cni-binary-copy\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.142977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-system-cni-dir\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " 
pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.143014 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-cnibin\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.143040 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxrt7\" (UniqueName: \"kubernetes.io/projected/db9c6842-03cb-4a28-baff-77f27f537aa4-kube-api-access-bxrt7\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.143067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.143225 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.143303 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-os-release\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: 
\"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.144057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/db9c6842-03cb-4a28-baff-77f27f537aa4-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.144616 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/db9c6842-03cb-4a28-baff-77f27f537aa4-cni-binary-copy\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.144666 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-system-cni-dir\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.144688 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/db9c6842-03cb-4a28-baff-77f27f537aa4-cnibin\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.163101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.163173 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.163199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.163248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.163269 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.210415 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxrt7\" (UniqueName: \"kubernetes.io/projected/db9c6842-03cb-4a28-baff-77f27f537aa4-kube-api-access-bxrt7\") pod \"multus-additional-cni-plugins-kkfdg\" (UID: \"db9c6842-03cb-4a28-baff-77f27f537aa4\") " pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258668 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-var-lib-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258727 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-systemd-units\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-etc-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258792 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-node-log\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258813 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-script-lib\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258887 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-kubelet\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258916 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258934 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-ovn\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.258991 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259009 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-systemd\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259025 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259056 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovn-node-metrics-cert\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259072 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-log-socket\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259184 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhb7n\" (UniqueName: \"kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259258 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-netns\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259306 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-netd\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-bin\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.259353 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-slash\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.270047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.270098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.270107 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.270124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.270135 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.276547 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364455 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-ovn\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364629 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-ovn\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364655 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-systemd\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364810 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovn-node-metrics-cert\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-systemd\") pod 
\"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364845 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-log-socket\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364922 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.364972 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-log-socket\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365104 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhb7n\" (UniqueName: \"kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-netns\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365191 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-netd\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365220 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-bin\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-bin\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365287 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-slash\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-netns\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365331 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-var-lib-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365346 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-netd\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365423 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365447 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-ovn-kubernetes\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365457 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-systemd-units\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365483 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-etc-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365492 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-var-lib-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365509 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-node-log\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365527 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-slash\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365530 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-script-lib\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365615 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-systemd-units\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365618 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-kubelet\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365650 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-kubelet\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365666 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365697 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365707 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-node-log\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365706 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-etc-openvswitch\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.365788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.376399 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.376507 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.376552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.376565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.376585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.376596 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.377218 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.404820 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.422867 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.446513 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.466471 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.492900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.492939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.492948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.492964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.492978 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.535772 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.547027 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.547087 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.547102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"09b6defaaddceb6926cc5222d998f31e950c9a5d6359c64ff0906ac6481d3ba8"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 
14:32:01.550366 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xvkg4" event={"ID":"af814c60-50de-499d-a1b2-b18f3749bc35","Type":"ContainerStarted","Data":"01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.550422 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xvkg4" event={"ID":"af814c60-50de-499d-a1b2-b18f3749bc35","Type":"ContainerStarted","Data":"7c21005ed6f22ff90aeb350baa8ade74081c04504b21129b835eaa29986bcd15"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.551955 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerStarted","Data":"2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.552006 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerStarted","Data":"7f89974d7245e64c8457b51972d5a63deb98154961a1a5d4273c32bb1fa3a6eb"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.553372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.554443 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerStarted","Data":"055dbea372765b4207a7998f933254a3ae7dab2102a6a7f0a005321c1ed56553"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.567142 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.567497 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:05.567473997 +0000 UTC m=+30.516697414 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.596704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.597069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.597081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.597096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.597107 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.605883 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.623808 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.647983 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.688461 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.699870 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.699932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.699944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.699960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.699972 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.706887 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.721615 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.739179 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.754464 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.769365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.769437 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.769482 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.769606 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.769665 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:05.769648291 +0000 UTC m=+30.718871708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.770100 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.770117 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.770129 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.770166 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-10 14:32:05.770154024 +0000 UTC m=+30.719377441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.770238 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.770273 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:05.770264597 +0000 UTC m=+30.719488014 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.813194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.813242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.813253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.813268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.813279 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:01Z","lastTransitionTime":"2025-12-10T14:32:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.817179 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.850809 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.868003 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.870371 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.870638 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.870695 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.870714 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:01 crc kubenswrapper[4718]: E1210 14:32:01.870803 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:05.8707757 +0000 UTC m=+30.819999287 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.882645 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.905227 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.942913 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.972329 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.975655 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:01 crc kubenswrapper[4718]: I1210 14:32:01.976413 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-script-lib\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 
14:32:02.052352 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.052804 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.052569 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.053299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.053346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.053358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.053373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.053220 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.054149 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.140567 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.149901 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovn-node-metrics-cert\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.155749 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.157099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.157132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.157143 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.157161 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.157175 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.259881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.259935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.259953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.259974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.259996 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.303663 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.311081 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.369153 4718 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.369192 4718 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.369680 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides podName:612af6cb-db4d-4874-a9ea-8b3c7eb8e30c nodeName:}" failed. No retries permitted until 2025-12-10 14:32:02.869545295 +0000 UTC m=+27.818768712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides") pod "ovnkube-node-dtch2" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c") : failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.369781 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config podName:612af6cb-db4d-4874-a9ea-8b3c7eb8e30c nodeName:}" failed. No retries permitted until 2025-12-10 14:32:02.869767751 +0000 UTC m=+27.818991168 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config") pod "ovnkube-node-dtch2" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c") : failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.371284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.371316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.371325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.371342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.371357 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.372535 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.405780 4718 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.426153 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.435298 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.439278 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.446823 4718 projected.go:194] Error preparing data for projected volume kube-api-access-jhb7n for pod openshift-ovn-kubernetes/ovnkube-node-dtch2: failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.446996 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n podName:612af6cb-db4d-4874-a9ea-8b3c7eb8e30c nodeName:}" failed. No retries permitted until 2025-12-10 14:32:02.946952248 +0000 UTC m=+27.896175665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jhb7n" (UniqueName: "kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n") pod "ovnkube-node-dtch2" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c") : failed to sync configmap cache: timed out waiting for the condition Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.481039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.481100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.481112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.481131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.481144 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.481379 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.500814 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.506144 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\
\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.530212 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.557468 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.559092 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerStarted","Data":"34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294"} Dec 10 14:32:02 crc kubenswrapper[4718]: E1210 14:32:02.581853 4718 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.584734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.584780 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.584793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.584810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.584825 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.591136 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.606868 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.628933 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.651521 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.673581 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.687648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.687720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.687737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.687762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.687783 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.707058 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.719881 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.733643 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.750501 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.791458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.791735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.791813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.791929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.792004 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.894104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.894421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.894525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.894596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.894669 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:02Z","lastTransitionTime":"2025-12-10T14:32:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.920082 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.920779 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.919446 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.921230 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.921920 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.946756 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:02 crc kubenswrapper[4718]: I1210 14:32:02.976379 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:02Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.005225 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.007985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc 
kubenswrapper[4718]: I1210 14:32:03.008026 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.008038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.008056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.008070 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.019306 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:03 crc kubenswrapper[4718]: E1210 14:32:03.019757 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.026883 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhb7n\" (UniqueName: \"kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.026944 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3551233
5ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.032208 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhb7n\" (UniqueName: \"kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n\") pod \"ovnkube-node-dtch2\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.044038 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9fmrd"] Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.044513 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.047068 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.047217 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.048231 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.048358 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.055268 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.110890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.111208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.111307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc 
kubenswrapper[4718]: I1210 14:32:03.111465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.111568 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.118226 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.128548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6f28\" (UniqueName: \"kubernetes.io/projected/186fd52a-c63c-461f-a551-8b57ead36f59-kube-api-access-r6f28\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.128857 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186fd52a-c63c-461f-a551-8b57ead36f59-host\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 
14:32:03.128946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/186fd52a-c63c-461f-a551-8b57ead36f59-serviceca\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.180742 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.214344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.214410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.214422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.214437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.214453 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.220626 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.229746 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6f28\" (UniqueName: \"kubernetes.io/projected/186fd52a-c63c-461f-a551-8b57ead36f59-kube-api-access-r6f28\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.229803 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186fd52a-c63c-461f-a551-8b57ead36f59-host\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.229821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/186fd52a-c63c-461f-a551-8b57ead36f59-serviceca\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.230550 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/186fd52a-c63c-461f-a551-8b57ead36f59-host\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " 
pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.230841 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/186fd52a-c63c-461f-a551-8b57ead36f59-serviceca\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: W1210 14:32:03.242232 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod612af6cb_db4d_4874_a9ea_8b3c7eb8e30c.slice/crio-41da8781c324f2c75342ff2d91854483b45d32b610071b068acb73c5cb47afa9 WatchSource:0}: Error finding container 41da8781c324f2c75342ff2d91854483b45d32b610071b068acb73c5cb47afa9: Status 404 returned error can't find the container with id 41da8781c324f2c75342ff2d91854483b45d32b610071b068acb73c5cb47afa9 Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.246858 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.267081 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6f28\" (UniqueName: \"kubernetes.io/projected/186fd52a-c63c-461f-a551-8b57ead36f59-kube-api-access-r6f28\") pod \"node-ca-9fmrd\" (UID: \"186fd52a-c63c-461f-a551-8b57ead36f59\") " pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.268538 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.288426 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.309373 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.317230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.317267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.317277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.317293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.317304 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.326896 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.341848 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.362373 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.373850 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.374613 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:32:03 crc kubenswrapper[4718]: E1210 14:32:03.374836 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.430523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc 
kubenswrapper[4718]: I1210 14:32:03.430559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.430568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.430586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.430601 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.550239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.550305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.550319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.550342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.550354 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.553163 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.559437 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-9fmrd" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.573982 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"41da8781c324f2c75342ff2d91854483b45d32b610071b068acb73c5cb47afa9"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.575541 4718 generic.go:334] "Generic (PLEG): container finished" podID="db9c6842-03cb-4a28-baff-77f27f537aa4" containerID="34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294" exitCode=0 Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.576926 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerDied","Data":"34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294"} Dec 10 14:32:03 crc kubenswrapper[4718]: W1210 14:32:03.620283 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186fd52a_c63c_461f_a551_8b57ead36f59.slice/crio-441b635714fea8bd6f2c5d7af7136d4b7858c5decd20bb3b081b30e340a209fa WatchSource:0}: Error finding container 441b635714fea8bd6f2c5d7af7136d4b7858c5decd20bb3b081b30e340a209fa: Status 404 returned error can't find the container with id 441b635714fea8bd6f2c5d7af7136d4b7858c5decd20bb3b081b30e340a209fa Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.653900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.653941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.653951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.653966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.653976 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.666745 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.770256 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.770305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.770316 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.770335 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.770348 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.805078 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.824242 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.837620 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.850260 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.862808 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.873314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.873356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.873366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.873382 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.873408 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:03Z","lastTransitionTime":"2025-12-10T14:32:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.878557 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f4
1beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.894901 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.912618 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:03 crc kubenswrapper[4718]: I1210 14:32:03.930541 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.023698 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:04 crc kubenswrapper[4718]: E1210 14:32:04.023840 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.024282 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:04 crc kubenswrapper[4718]: E1210 14:32:04.024338 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.030604 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:03Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.031807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.031842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.031852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.031868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.031882 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.065213 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.079643 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.100523 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.121505 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.134004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.134053 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.134066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.134086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.134100 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.142142 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.156609 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.171326 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.185535 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.204508 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.218100 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.234154 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.236501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc 
kubenswrapper[4718]: I1210 14:32:04.236549 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.236564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.236583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.236595 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.249071 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.261649 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.338880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.338926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.338937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.338953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.338968 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.441561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.441606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.441617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.441633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.441645 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.544449 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.544505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.544517 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.544536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.544549 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.580481 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" exitCode=0 Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.580591 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.582618 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9fmrd" event={"ID":"186fd52a-c63c-461f-a551-8b57ead36f59","Type":"ContainerStarted","Data":"441b635714fea8bd6f2c5d7af7136d4b7858c5decd20bb3b081b30e340a209fa"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.608087 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},
{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.622908 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c
bf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.637018 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.649178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.649232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.649244 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 
14:32:04.649267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.649281 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.661248 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.675810 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.688916 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04
a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.700051 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.752991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.753042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.753054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 
14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.753071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.753084 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.762980 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.818705 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.830073 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.842179 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.853910 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.855247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.855291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.855303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.855320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 
14:32:04.855332 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.867608 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.880834 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:04Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.957508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.957546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.957557 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.957574 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 10 14:32:04 crc kubenswrapper[4718]: I1210 14:32:04.957585 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:04Z","lastTransitionTime":"2025-12-10T14:32:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.030970 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.031159 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.066854 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.066896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.066908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.066927 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.066940 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.170222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.170293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.170305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.170320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.170330 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.287955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.287987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.287995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.288009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.288019 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.391365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.391443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.391455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.391472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.391484 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.494534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.494572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.494580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.494597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.494607 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.586646 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9fmrd" event={"ID":"186fd52a-c63c-461f-a551-8b57ead36f59","Type":"ContainerStarted","Data":"6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.589719 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerStarted","Data":"d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.591188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.594866 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.595020 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:32:13.594994061 +0000 UTC m=+38.544217478 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.597338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.597407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.597417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.597437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.597448 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.675357 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:05Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.745434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.745525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.745538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.745598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.745615 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.840168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.840243 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.840272 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840425 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840488 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:13.840469669 +0000 UTC m=+38.789693076 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840566 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840599 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840610 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840634 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:13.840627433 +0000 UTC m=+38.789850850 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840723 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.840860 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:13.840827208 +0000 UTC m=+38.790050775 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.874373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.874425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.874436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.874451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.874462 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.941264 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.941523 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.941550 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.941563 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:05 crc kubenswrapper[4718]: E1210 14:32:05.941633 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:13.941612167 +0000 UTC m=+38.890835584 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.989897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.989960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.989971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.989990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:05 crc kubenswrapper[4718]: I1210 14:32:05.990009 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:05Z","lastTransitionTime":"2025-12-10T14:32:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.021605 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:06 crc kubenswrapper[4718]: E1210 14:32:06.021737 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.021783 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:06 crc kubenswrapper[4718]: E1210 14:32:06.021831 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.040758 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:05Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.203625 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.205737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc 
kubenswrapper[4718]: I1210 14:32:06.205799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.205812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.205828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.205839 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.390932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.391746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.391870 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.391956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.392020 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.408711 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.452777 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.472303 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.490736 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.495025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.495091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc 
kubenswrapper[4718]: I1210 14:32:06.495104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.495126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.495140 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.505910 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.525620 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.551633 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.572220 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.587755 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.597055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.597104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.597115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.597134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.597145 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.598284 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.608812 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.676803 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.700161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.700210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.700219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.700233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.700245 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.703820 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.719231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.733436 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.747674 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.768033 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.782098 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 
14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.793331 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.802630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.802684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.802696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.802713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.802727 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.803618 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.819860 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.834562 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.850330 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.866066 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.881374 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.900327 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.906347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.906405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.906439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.906485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 
14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.906496 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:06Z","lastTransitionTime":"2025-12-10T14:32:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.915605 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.926170 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.938578 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.954131 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.968778 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.983159 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:06 crc kubenswrapper[4718]: I1210 14:32:06.993814 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.004534 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.009942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.009982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.010014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.010030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.010040 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.019340 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:07 crc kubenswrapper[4718]: E1210 14:32:07.019588 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.020902 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.034491 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.047179 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.058876 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.072203 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.091685 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:07Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.112273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.112311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.112324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.112340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.112353 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.225602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.225669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.225681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.225718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.225730 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.328009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.328062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.328070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.328086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.328096 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.433126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.433166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.433179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.433195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.433207 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.535737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.535771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.535782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.535811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.535825 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.606273 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.639129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.639196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.639209 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.639225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.639236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.742441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.742488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.742499 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.742511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.742520 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.845710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.845761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.845772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.845788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.845798 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.948759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.948830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.948846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.948871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:07 crc kubenswrapper[4718]: I1210 14:32:07.948888 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:07Z","lastTransitionTime":"2025-12-10T14:32:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.020311 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:08 crc kubenswrapper[4718]: E1210 14:32:08.020604 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.020810 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:08 crc kubenswrapper[4718]: E1210 14:32:08.021003 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.051685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.051735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.051745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.051759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.051770 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.154807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.154868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.154881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.154903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.154919 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.258158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.258234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.258247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.258309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.258340 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.361554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.361636 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.361647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.361768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.361786 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.466832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.466902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.466917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.466939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.466961 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.570820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.570881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.570897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.570920 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.570935 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.674369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.674447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.674474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.674498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.674523 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.777710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.777788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.777800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.777999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.778016 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.882012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.882408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.882540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.882658 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.882743 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.985533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.985574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.985585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.985599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:08 crc kubenswrapper[4718]: I1210 14:32:08.985609 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:08Z","lastTransitionTime":"2025-12-10T14:32:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.019927 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.020168 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.089309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.089357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.089716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.089747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.089774 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.193205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.193593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.193605 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.193619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.193630 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.265956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.266057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.266082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.266115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.266128 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.280228 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.287089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.287151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.287172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.287203 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.287218 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.303355 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.309976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.310041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.310061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.310087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.310099 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.326041 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.331123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.331154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.331163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.331176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.331186 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.348218 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.353755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.353800 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.353809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.353827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.353839 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.367716 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: E1210 14:32:09.367902 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.369948 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.369977 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.369989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.370005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.370022 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.473311 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.473377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.473426 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.473452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.473475 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.597572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.597654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.597713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.597749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.597762 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.624656 4718 generic.go:334] "Generic (PLEG): container finished" podID="db9c6842-03cb-4a28-baff-77f27f537aa4" containerID="d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa" exitCode=0 Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.624757 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerDied","Data":"d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.637306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.637352 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.643811 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.657821 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10
T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.670267 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.685267 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.699434 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.699937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.699969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.699978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.699991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.700002 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.712670 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.722838 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.733749 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.749370 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.768082 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.779116 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.790546 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.803012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.803067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.803081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.803103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.803114 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.808312 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.828011 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:09Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.905260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.905293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.905302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.905317 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:09 crc kubenswrapper[4718]: I1210 14:32:09.905326 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:09Z","lastTransitionTime":"2025-12-10T14:32:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.008844 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.008891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.008906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.008923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.008936 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.019700 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:10 crc kubenswrapper[4718]: E1210 14:32:10.019911 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.019980 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:10 crc kubenswrapper[4718]: E1210 14:32:10.020180 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.111852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.111884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.111892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.111905 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.111914 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.215297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.215375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.215414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.215433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.215455 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.319162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.319216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.319226 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.319248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.319261 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.424699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.424813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.424855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.424888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.424916 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.541358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.541434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.541450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.541466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.541477 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.643612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.643669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.643682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.643707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.643726 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.644937 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerStarted","Data":"8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.650164 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.660705 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.680548 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.699712 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c
bf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.715130 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.730504 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.741765 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.745707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.745739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.745750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.745766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.745778 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.753357 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.765140 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.776935 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.788945 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.798488 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.815275 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.844267 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.848471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.848509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.848519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.848533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 
14:32:10.848546 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.868729 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:10Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.950451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.950497 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.950508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.950523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:10 crc kubenswrapper[4718]: I1210 14:32:10.950535 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:10Z","lastTransitionTime":"2025-12-10T14:32:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.020173 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:11 crc kubenswrapper[4718]: E1210 14:32:11.020366 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.052863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.052903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.052916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.052990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.053015 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.156340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.156381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.156403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.156428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.156439 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.260894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.260956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.260966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.260991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.261006 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.364202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.364277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.364293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.364319 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.364357 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.467672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.467747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.467765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.467794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.467819 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.572878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.573014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.573039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.573070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.573101 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675297 4718 generic.go:334] "Generic (PLEG): container finished" podID="db9c6842-03cb-4a28-baff-77f27f537aa4" containerID="8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c" exitCode=0 Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerDied","Data":"8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675807 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.675820 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.695335 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.712602 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.731140 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.747281 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.761891 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.776752 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.779680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.779857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.779872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.779889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.779902 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.796088 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 
14:32:11.814349 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 
14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.831277 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.846278 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.861411 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.880470 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.882689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.882712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.882720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.882734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.882746 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.893730 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.903560 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.985025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.985076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.985090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.985107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:11 crc kubenswrapper[4718]: I1210 14:32:11.985121 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:11Z","lastTransitionTime":"2025-12-10T14:32:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.019828 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:12 crc kubenswrapper[4718]: E1210 14:32:12.020042 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.020444 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:12 crc kubenswrapper[4718]: E1210 14:32:12.020858 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.087898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.087944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.087959 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.087979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.087994 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.190481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.190522 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.190535 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.190554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.190568 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.293371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.293445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.293456 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.293472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.293483 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.397447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.397504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.397516 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.397542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.397553 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.501468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.501533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.501544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.501565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.501577 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.605561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.605621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.605632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.605662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.605676 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.684317 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.687373 4718 generic.go:334] "Generic (PLEG): container finished" podID="db9c6842-03cb-4a28-baff-77f27f537aa4" containerID="b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d" exitCode=0 Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.687436 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerDied","Data":"b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.708806 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.709132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.709164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.709184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.709213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.709231 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.728269 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.744536 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.765120 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.783603 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.798842 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.812976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.813032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.813044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.813064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 
14:32:12.813076 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.818674 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.835417 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.873797 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14
:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.897514 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.916661 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.916720 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.916733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.916753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.916767 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:12Z","lastTransitionTime":"2025-12-10T14:32:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.928231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.948613 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.972058 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:12 crc kubenswrapper[4718]: I1210 14:32:12.984689 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:12Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.019559 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.019834 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.021894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.021956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.021972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.021992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.022011 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.124925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.124987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.124998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.125017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.125030 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.228896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.228956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.228966 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.228984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.228996 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.337263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.337325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.337352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.337371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.337424 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.440713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.440779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.440794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.440822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.440839 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.544485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.544538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.544551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.544567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.544580 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.605658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.605956 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:32:29.605909843 +0000 UTC m=+54.555133270 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.647731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.647790 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.647803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.647825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.647845 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.751254 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.751324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.751338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.751364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.751379 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.853973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.854014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.854025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.854042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.854053 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.909071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.909187 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.909221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909335 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909358 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909417 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909446 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909473 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909481 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:29.909449221 +0000 UTC m=+54.858672668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909513 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:29.909502892 +0000 UTC m=+54.858726309 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:13 crc kubenswrapper[4718]: E1210 14:32:13.909534 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:29.909525883 +0000 UTC m=+54.858749300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.957233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.957290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.957301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.957321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:13 crc kubenswrapper[4718]: I1210 14:32:13.957331 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:13Z","lastTransitionTime":"2025-12-10T14:32:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.010119 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:14 crc kubenswrapper[4718]: E1210 14:32:14.010301 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:14 crc kubenswrapper[4718]: E1210 14:32:14.010333 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:14 crc kubenswrapper[4718]: E1210 14:32:14.010347 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:14 crc kubenswrapper[4718]: E1210 14:32:14.010445 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-10 14:32:30.010424125 +0000 UTC m=+54.959647542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.020236 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.020303 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:14 crc kubenswrapper[4718]: E1210 14:32:14.020355 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:14 crc kubenswrapper[4718]: E1210 14:32:14.020485 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.061242 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.061313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.061328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.061351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.061367 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.165283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.165378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.165416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.165441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.165454 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.268511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.268571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.268588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.268612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.268628 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.372034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.372089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.372099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.372118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.372130 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.475011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.475055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.475067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.475081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.475090 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.579262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.579537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.579547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.579562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.579574 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.681677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.681742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.681764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.682217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.682279 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.786293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.786423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.786458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.786486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.786505 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.889007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.889070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.889088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.889115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.889134 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.992334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.992415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.992433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.992453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:14 crc kubenswrapper[4718]: I1210 14:32:14.992468 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:14Z","lastTransitionTime":"2025-12-10T14:32:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.020123 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:15 crc kubenswrapper[4718]: E1210 14:32:15.020302 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.095291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.095360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.095377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.095440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.095462 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.197992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.198087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.198105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.198129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.198166 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.302267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.302374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.302418 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.302438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.302452 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.406861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.406950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.406979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.407010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.407034 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.510612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.510668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.510683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.510701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.510719 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.613770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.613836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.613855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.613879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.613897 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.703186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.704496 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.704538 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.716378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.716457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.716470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.716490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.716502 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.717568 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerStarted","Data":"193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.718583 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\
\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.746143 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.752601 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.753722 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.764711 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.776853 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.790052 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.806003 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.820267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.820322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.820338 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.820364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 
14:32:15.820377 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.822317 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.839113 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.857801 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.875727 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.898310 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.917464 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.924237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.924494 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.924587 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.924702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.924795 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:15Z","lastTransitionTime":"2025-12-10T14:32:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.944449 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.972062 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:15 crc kubenswrapper[4718]: I1210 14:32:15.992851 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:
59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.006738 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.023239 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.023750 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:16 crc kubenswrapper[4718]: E1210 14:32:16.023826 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.024288 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:16 crc kubenswrapper[4718]: E1210 14:32:16.024369 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.024650 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.030303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.030498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.030590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.030702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.030918 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.043699 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.065227 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.083141 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.156049 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.159171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.159216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.159227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.159243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.159256 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.171424 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f4
1beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.188146 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers 
with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false
,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\
\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.203179 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.224636 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.240481 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.255250 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.261795 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.261869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.261885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.261913 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.261931 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.319511 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.338152 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.349223 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.364858 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.379231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.393567 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.469117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.469157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.469167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.469185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.469196 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.476247 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.493158 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.514302 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.537822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.554739 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.572819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.572874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.572887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.572907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.572921 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.573109 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.594134 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.616084 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.637556 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxr
t7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.676018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.676073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.676087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.676105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.676117 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.720680 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.760156 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77"] Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.760737 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.762691 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.762716 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.778357 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.779116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.779151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.779160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.779175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.779187 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.790702 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.805509 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.822294 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.835677 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.847946 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.859563 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.874756 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxr
t7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.881130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.881158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.881167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.881181 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.881191 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.888278 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.901821 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.914283 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.930372 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.949815 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.961223 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/22aace52-6f58-4459-89d2-9fec98b12ead-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.961272 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/22aace52-6f58-4459-89d2-9fec98b12ead-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.961292 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/22aace52-6f58-4459-89d2-9fec98b12ead-env-overrides\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.961400 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47m9\" (UniqueName: \"kubernetes.io/projected/22aace52-6f58-4459-89d2-9fec98b12ead-kube-api-access-l47m9\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.963294 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"n
ame\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.991102 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.992660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.992699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.992712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.992731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:16 crc kubenswrapper[4718]: I1210 14:32:16.992744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:16Z","lastTransitionTime":"2025-12-10T14:32:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.020105 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:17 crc kubenswrapper[4718]: E1210 14:32:17.020339 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.062610 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/22aace52-6f58-4459-89d2-9fec98b12ead-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.062702 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/22aace52-6f58-4459-89d2-9fec98b12ead-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.062726 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/22aace52-6f58-4459-89d2-9fec98b12ead-env-overrides\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.062755 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l47m9\" (UniqueName: \"kubernetes.io/projected/22aace52-6f58-4459-89d2-9fec98b12ead-kube-api-access-l47m9\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.064502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/22aace52-6f58-4459-89d2-9fec98b12ead-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.064910 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/22aace52-6f58-4459-89d2-9fec98b12ead-env-overrides\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.071148 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/22aace52-6f58-4459-89d2-9fec98b12ead-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.096744 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47m9\" (UniqueName: \"kubernetes.io/projected/22aace52-6f58-4459-89d2-9fec98b12ead-kube-api-access-l47m9\") pod \"ovnkube-control-plane-749d76644c-49r77\" (UID: \"22aace52-6f58-4459-89d2-9fec98b12ead\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.098759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.098798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.098808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.098825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.098837 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.225857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.225891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.225900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.225915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.225924 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.328730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.328785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.328805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.328827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.328840 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.382217 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" Dec 10 14:32:17 crc kubenswrapper[4718]: W1210 14:32:17.397198 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22aace52_6f58_4459_89d2_9fec98b12ead.slice/crio-0e4ceaa60315e91b2e4702df68e04a00141c1d4e9b3b0a3d556554777ff0c9b4 WatchSource:0}: Error finding container 0e4ceaa60315e91b2e4702df68e04a00141c1d4e9b3b0a3d556554777ff0c9b4: Status 404 returned error can't find the container with id 0e4ceaa60315e91b2e4702df68e04a00141c1d4e9b3b0a3d556554777ff0c9b4 Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.433379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.433444 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.433456 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.433478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.433493 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.497453 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-r8zbt"] Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.498645 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:17 crc kubenswrapper[4718]: E1210 14:32:17.498777 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.514412 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.531742 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.536021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.536048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.536060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.536075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.536089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.551231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z 
is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.563547 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.568786 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.568852 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qsk\" (UniqueName: \"kubernetes.io/projected/1494ebfa-d66c-4200-a336-2cedebcd5889-kube-api-access-p7qsk\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.576716 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.591647 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.608444 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.628006 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.640324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.640698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.640832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.640971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.641097 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.645314 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.666253 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxr
t7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.674540 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.674673 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7qsk\" (UniqueName: \"kubernetes.io/projected/1494ebfa-d66c-4200-a336-2cedebcd5889-kube-api-access-p7qsk\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:17 crc kubenswrapper[4718]: E1210 14:32:17.674823 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:17 crc kubenswrapper[4718]: E1210 14:32:17.675012 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:18.174959415 +0000 UTC m=+43.124182912 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.686819 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc 
kubenswrapper[4718]: I1210 14:32:17.700260 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7qsk\" (UniqueName: \"kubernetes.io/projected/1494ebfa-d66c-4200-a336-2cedebcd5889-kube-api-access-p7qsk\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.711988 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.725654 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" event={"ID":"22aace52-6f58-4459-89d2-9fec98b12ead","Type":"ContainerStarted","Data":"0e4ceaa60315e91b2e4702df68e04a00141c1d4e9b3b0a3d556554777ff0c9b4"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.727734 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.729285 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.729749 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.729921 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.797020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc 
kubenswrapper[4718]: I1210 14:32:17.798767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.798874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.798965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.799048 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.799483 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.815865 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.831813 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.864207 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.883276 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxr
t7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.894924 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:17 crc 
kubenswrapper[4718]: I1210 14:32:17.902130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.902352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.902454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.902534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.902596 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:17Z","lastTransitionTime":"2025-12-10T14:32:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:17 crc kubenswrapper[4718]: I1210 14:32:17.910034 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.006137 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.006734 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.006834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.007113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.007208 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.019639 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.019726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:18 crc kubenswrapper[4718]: E1210 14:32:18.019854 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:18 crc kubenswrapper[4718]: E1210 14:32:18.019956 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.059276 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:17Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.083651 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.110373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.110447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.110465 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.110487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.110503 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.118027 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.145785 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.163535 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.180693 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.192226 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:18 crc kubenswrapper[4718]: E1210 14:32:18.192592 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:18 crc kubenswrapper[4718]: E1210 14:32:18.192773 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:19.192742894 +0000 UTC m=+44.141966311 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.198231 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.212846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.212894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.212906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.212925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.212937 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.213459 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\"
,\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.231706 4718 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.248459 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.267783 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.285285 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.301692 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.316645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.316703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.316714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.316732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.316744 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.419668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.419782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.419838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.419896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.419965 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.523969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.524044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.524064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.524084 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.524105 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.627024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.627081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.627093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.627120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.627138 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.748151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.748225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.748251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.748287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.748303 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.754354 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" event={"ID":"22aace52-6f58-4459-89d2-9fec98b12ead","Type":"ContainerStarted","Data":"bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.754438 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" event={"ID":"22aace52-6f58-4459-89d2-9fec98b12ead","Type":"ContainerStarted","Data":"2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.791541 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.805700 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.817134 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.833549 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.855914 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.875024 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.887666 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.909828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.909877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.909888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.909906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.909918 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:18Z","lastTransitionTime":"2025-12-10T14:32:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.915924 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z 
is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.928542 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.949552 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.962617 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.975507 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:18 crc kubenswrapper[4718]: I1210 14:32:18.993137 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:18Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.006230 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.012836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.012900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.012917 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.012942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.012956 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.020098 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.020118 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.020232 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.020325 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.022077 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxr
t7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.033244 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc 
kubenswrapper[4718]: I1210 14:32:19.115595 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.115640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.115666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.115678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.115687 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.206624 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.206975 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.207123 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:21.207085162 +0000 UTC m=+46.156308619 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.219996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.220069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.220081 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.220099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.220112 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.323113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.323204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.323237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.323268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.323293 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.426941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.427025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.427041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.427067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.427081 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.530221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.530271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.530280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.530299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.530309 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.633894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.633955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.633969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.633989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.634000 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.737833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.737902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.737919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.737941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.737960 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.757745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.757820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.757834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.757861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.757877 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.772829 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.777220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.777280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.777292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.777308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.777318 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.798279 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.804645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.804714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.804733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.804761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.804776 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.823191 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.828141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.828189 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.828201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.828223 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.828236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.846971 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.852508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.852852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.852984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.853105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.853236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.867414 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:19Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:19 crc kubenswrapper[4718]: E1210 14:32:19.867633 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.870042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.870205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.870293 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.870435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.870558 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.975299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.975360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.975380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.975432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:19 crc kubenswrapper[4718]: I1210 14:32:19.975453 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:19Z","lastTransitionTime":"2025-12-10T14:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.019473 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.019548 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:20 crc kubenswrapper[4718]: E1210 14:32:20.019760 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:20 crc kubenswrapper[4718]: E1210 14:32:20.019915 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.079626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.079691 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.079718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.079739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.079752 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.182871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.182938 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.182958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.182981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.183001 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.288317 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.288366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.288399 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.288419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.288432 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.460006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.460056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.460066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.460088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.460101 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.668172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.668236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.668248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.668264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.668277 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.763048 4718 generic.go:334] "Generic (PLEG): container finished" podID="db9c6842-03cb-4a28-baff-77f27f537aa4" containerID="193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854" exitCode=0 Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.763108 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerDied","Data":"193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.787842 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.788300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.788405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.788423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.788447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.788463 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.804871 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.820033 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.837822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.860271 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.878691 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.895419 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.897031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.897085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.897105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.899636 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.899656 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:20Z","lastTransitionTime":"2025-12-10T14:32:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.915051 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f4
1beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.943368 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.957811 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.974083 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:20 crc kubenswrapper[4718]: I1210 14:32:20.992084 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:20Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.013608 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:21Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.014089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.014141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.014154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.014179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.014194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.019881 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.019951 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:21 crc kubenswrapper[4718]: E1210 14:32:21.020044 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:21 crc kubenswrapper[4718]: E1210 14:32:21.020164 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.026090 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:21Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.042497 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:21Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.055773 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:21Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.117511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.117565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.117580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.117603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.117618 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.221356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.221418 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.221428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.221442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.221452 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.234062 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:21 crc kubenswrapper[4718]: E1210 14:32:21.234230 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:21 crc kubenswrapper[4718]: E1210 14:32:21.234303 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:25.234283562 +0000 UTC m=+50.183506979 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.324713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.324755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.324765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.324779 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.324789 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.429171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.429243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.429260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.429283 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.429301 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.532337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.532381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.532416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.532442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.532461 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.635352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.635444 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.635479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.635526 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.635547 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.738420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.738466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.738477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.738496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.738508 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.842358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.842408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.842416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.842431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.842440 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.946886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.946957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.946971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.946993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:21 crc kubenswrapper[4718]: I1210 14:32:21.947013 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:21Z","lastTransitionTime":"2025-12-10T14:32:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.019996 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:22 crc kubenswrapper[4718]: E1210 14:32:22.020153 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.020645 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:22 crc kubenswrapper[4718]: E1210 14:32:22.020704 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.050282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.050313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.050323 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.050336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.050345 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.156427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.156483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.156495 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.156512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.156524 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.258833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.258879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.258890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.258916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.258929 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.361876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.361916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.361928 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.361949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.361963 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.467288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.467350 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.467364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.467403 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.467418 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.570227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.570277 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.570288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.570306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.570316 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.721932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.721985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.721996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.722013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.722029 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.827897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.827946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.827958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.827976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.827988 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.931090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.931145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.931157 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.931174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:22 crc kubenswrapper[4718]: I1210 14:32:22.931186 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:22Z","lastTransitionTime":"2025-12-10T14:32:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.019650 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.019691 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:23 crc kubenswrapper[4718]: E1210 14:32:23.019830 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:23 crc kubenswrapper[4718]: E1210 14:32:23.019949 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.033171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.033216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.033225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.033238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.033248 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.135965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.135995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.136006 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.136031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.136057 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.240949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.241007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.241019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.241040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.241054 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.362747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.362795 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.362811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.362829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.362842 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.466088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.466145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.466162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.466183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.466199 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.569296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.569357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.569372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.569416 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.569436 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.671940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.671985 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.671997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.672014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.672024 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.775262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.775310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.775321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.775339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.775353 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.869602 4718 generic.go:334] "Generic (PLEG): container finished" podID="db9c6842-03cb-4a28-baff-77f27f537aa4" containerID="95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d" exitCode=0 Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.869641 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerDied","Data":"95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.877789 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.877848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.877865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.877886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.877911 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.919736 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.936528 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.953139 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.976835 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.984225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.984312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.984326 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.984349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.984362 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:23Z","lastTransitionTime":"2025-12-10T14:32:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:23 crc kubenswrapper[4718]: I1210 14:32:23.997822 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-10T14:32:23Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.019644 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.019734 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:24 crc kubenswrapper[4718]: E1210 14:32:24.019836 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:24 crc kubenswrapper[4718]: E1210 14:32:24.019895 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.031182 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.047596 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.061680 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.074202 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.088849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.089082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.089149 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.089274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.089355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.089699 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb23
27fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-releas
e\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.099600 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc 
kubenswrapper[4718]: I1210 14:32:24.111859 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.123597 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.136136 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.149789 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.169264 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.192811 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.192872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.192884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.192915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.192933 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.297410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.297465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.297478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.297501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.297522 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.401903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.401955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.401969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.401990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.402003 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.505320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.505371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.505401 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.505424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.505440 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.608964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.609005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.609014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.609027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.609039 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.712695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.712766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.712781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.712804 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.712822 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.815768 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.815815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.815825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.815840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.815852 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.876806 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/0.log" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.880232 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5" exitCode=1 Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.880334 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.881111 4718 scope.go:117] "RemoveContainer" containerID="1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.885962 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" event={"ID":"db9c6842-03cb-4a28-baff-77f27f537aa4","Type":"ContainerStarted","Data":"bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.902966 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.915028 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.918188 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.918219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.918227 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.918241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.918251 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:24Z","lastTransitionTime":"2025-12-10T14:32:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.929628 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.947118 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.963425 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.978828 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:24 crc kubenswrapper[4718]: I1210 14:32:24.993945 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:24Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.004449 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.019082 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.019255 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.019276 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:25 crc kubenswrapper[4718]: E1210 14:32:25.019712 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:25 crc kubenswrapper[4718]: E1210 14:32:25.019795 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.020645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.020681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.020694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.020710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.020721 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.036162 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.050877 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc 
kubenswrapper[4718]: I1210 14:32:25.070773 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.089627 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.105805 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.121245 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.123349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.123406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.123415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.123430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.123440 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.144470 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI1210 14:32:24.392179 5957 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.392370 5957 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.392550 5957 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393240 5957 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393627 5957 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393738 5957 factory.go:656] Stopping watch factory\\\\nI1210 14:32:24.393670 5957 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.393754 5957 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393802 5957 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.161687 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.175087 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.190201 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.206175 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.218333 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.226127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.226173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.226186 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.226204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.226215 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.230028 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.246507 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.261173 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.274461 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.281418 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:25 crc kubenswrapper[4718]: E1210 14:32:25.281588 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:25 crc kubenswrapper[4718]: E1210 14:32:25.281674 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:33.281652152 +0000 UTC m=+58.230875569 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.289285 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.302478 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc 
kubenswrapper[4718]: I1210 14:32:25.330360 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.330435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.330447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.330468 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.330480 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.332902 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI1210 14:32:24.392179 5957 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.392370 5957 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.392550 5957 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393240 5957 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393627 5957 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393738 5957 factory.go:656] Stopping watch factory\\\\nI1210 14:32:24.393670 5957 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.393754 5957 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393802 5957 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.354315 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.372953 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.373561 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.383622 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.389501 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.403078 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.418609 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.432415 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.434077 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.434121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.434202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.434229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.434653 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.452290 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI1210 14:32:24.392179 5957 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.392370 5957 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.392550 5957 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393240 5957 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393627 5957 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393738 5957 factory.go:656] Stopping watch factory\\\\nI1210 14:32:24.393670 5957 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.393754 5957 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393802 5957 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.468302 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.482629 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.496301 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.509600 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.522654 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.537739 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.537832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.537874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.537887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.537907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.537920 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.551721 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.565746 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.581133 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.602140 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.624764 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.641021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.641086 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.641096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.641114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 
14:32:25.641129 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.642290 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f
12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.656735 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc 
kubenswrapper[4718]: I1210 14:32:25.675615 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b
78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.744149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.744194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.744205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.744224 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.744241 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.846529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.846577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.846586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.846602 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.846614 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.893779 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/0.log" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.897981 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.898212 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.918381 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.937936 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.949261 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.949320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.949334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.949357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.949377 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:25Z","lastTransitionTime":"2025-12-10T14:32:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.956585 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.972022 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.982242 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:25 crc kubenswrapper[4718]: I1210 14:32:25.993344 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:25Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.008719 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.019982 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:26 crc kubenswrapper[4718]: E1210 14:32:26.020131 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.020223 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:26 crc kubenswrapper[4718]: E1210 14:32:26.020466 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.021401 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.039486 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc 
kubenswrapper[4718]: I1210 14:32:26.052836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.052877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.052885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.052900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.052911 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.055637 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.071426 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.084614 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.103046 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI1210 14:32:24.392179 5957 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.392370 5957 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.392550 5957 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393240 5957 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393627 5957 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393738 5957 factory.go:656] Stopping watch factory\\\\nI1210 14:32:24.393670 5957 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.393754 5957 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393802 5957 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run
-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.120279 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.135161 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.152248 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.154937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.154983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.155005 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.155022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.155032 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.167705 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.179103 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.193244 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.206745 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.219348 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.232001 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.245947 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.258528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.258588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.258599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.258624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.258639 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.261967 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f4
1beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.279158 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.291436 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc 
kubenswrapper[4718]: I1210 14:32:26.309707 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b
78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.322450 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.334120 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.362524 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.362559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.362569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.362586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.362597 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.374057 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI1210 14:32:24.392179 5957 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.392370 5957 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.392550 5957 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393240 5957 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393627 5957 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393738 5957 factory.go:656] Stopping watch factory\\\\nI1210 14:32:24.393670 5957 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.393754 5957 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393802 5957 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run
-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.398606 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.416513 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.429607 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.442030 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:26Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.465078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.465121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.465134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.465152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.465165 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.568327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.568366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.568375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.568412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.568423 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.671611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.671672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.671687 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.671729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.671745 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.775100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.775160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.775173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.775194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.775208 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.878289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.878340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.878355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.878381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.878417 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.981298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.981343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.981354 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.981371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:26 crc kubenswrapper[4718]: I1210 14:32:26.981405 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:26Z","lastTransitionTime":"2025-12-10T14:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.020037 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.020121 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:27 crc kubenswrapper[4718]: E1210 14:32:27.020173 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:27 crc kubenswrapper[4718]: E1210 14:32:27.020311 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.084679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.084723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.084733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.084746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.084756 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.188067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.188139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.188153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.188174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.188187 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.291607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.291954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.291969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.291986 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.291999 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.394379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.394472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.394490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.394511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.394527 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.497052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.497111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.497128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.497167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.497182 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.600816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.600860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.600871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.600887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.600897 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.703346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.703415 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.703429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.703447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.703462 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.806325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.806420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.806436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.806461 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.806474 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.909471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.909567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.909577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.909599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.909610 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:27Z","lastTransitionTime":"2025-12-10T14:32:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.910418 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/1.log" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.911696 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/0.log" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.916917 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936" exitCode=1 Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.916972 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936"} Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.917024 4718 scope.go:117] "RemoveContainer" containerID="1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.918715 4718 scope.go:117] "RemoveContainer" containerID="b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936" Dec 10 14:32:27 crc kubenswrapper[4718]: E1210 14:32:27.919113 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.937552 4718 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.949833 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.960670 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.971581 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:27 crc kubenswrapper[4718]: I1210 14:32:27.987197 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.000628 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:27Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.012447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.012480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.012489 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.012503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.012513 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.016485 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.019561 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.019561 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:28 crc kubenswrapper[4718]: E1210 14:32:28.019688 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:28 crc kubenswrapper[4718]: E1210 14:32:28.019875 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.034263 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.049311 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.062827 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e9
6606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.076787 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.090444 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc 
kubenswrapper[4718]: I1210 14:32:28.106525 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.114693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.114731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.114742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.114757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.114768 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.118890 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.130730 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.142927 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.163553 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b308265eb50c9a1d2ecb14c394c7c261eb6dfb2173452a951ed7e67a84bc9e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"message\\\":\\\"ctory.go:160\\\\nI1210 14:32:24.392179 5957 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.392370 5957 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.392550 5957 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393240 5957 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393627 5957 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393738 5957 factory.go:656] Stopping watch factory\\\\nI1210 14:32:24.393670 5957 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1210 14:32:24.393754 5957 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1210 14:32:24.393802 5957 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 
14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\
\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\
":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:28Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:28 crc kubenswrapper[4718]: 
I1210 14:32:28.217911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.217960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.217971 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.217991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.218004 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.320834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.320937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.320958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.320982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.320999 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.423692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.423732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.423741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.423755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.423766 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.526819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.527099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.527249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.527371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.527479 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.631869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.631933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.631946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.631967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.631981 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.735128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.735176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.735184 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.735198 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.735207 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.837508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.837558 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.837575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.837593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.837607 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.922929 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/1.log" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.939606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.939675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.939693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.939741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:28 crc kubenswrapper[4718]: I1210 14:32:28.939779 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:28Z","lastTransitionTime":"2025-12-10T14:32:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.019598 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.019719 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.020113 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.020134 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.042058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.042112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.042121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.042135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.042165 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.145606 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.146480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.146629 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.146755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.146869 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.250099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.250240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.250268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.250295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.250313 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.353890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.353970 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.353989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.354020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.354034 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.457076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.457132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.457144 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.457165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.457187 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.559995 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.560037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.560047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.560066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.560078 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.627494 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.627816 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:33:01.627778907 +0000 UTC m=+86.577002324 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.663439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.663483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.663492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.663507 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.663517 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.765591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.765633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.765645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.765662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.765674 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.869259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.869328 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.869340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.869359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.869375 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.931409 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.931454 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.931490 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.931647 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.931708 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:01.931690264 +0000 UTC m=+86.880913681 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.931990 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.932009 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.932029 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.932060 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:01.932052063 +0000 UTC m=+86.881275480 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.932365 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:29 crc kubenswrapper[4718]: E1210 14:32:29.932638 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:01.932601607 +0000 UTC m=+86.881825154 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.971874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.971933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.971945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.971963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:29 crc kubenswrapper[4718]: I1210 14:32:29.971979 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:29Z","lastTransitionTime":"2025-12-10T14:32:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019252 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019286 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019399 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.019417 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.019466 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.019561 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.033954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.034301 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.034360 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.034378 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.034488 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:02.034463704 +0000 UTC m=+86.983687121 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.034716 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:30Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.038728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.038849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.038954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.039050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.039149 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.054493 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:30Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.058887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.058954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.058965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.058983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.058996 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.072789 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:30Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.077190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.077250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.077263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.077302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.077316 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.092490 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:30Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.097266 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.097321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.097332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.097351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.097364 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.112479 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:30Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:30 crc kubenswrapper[4718]: E1210 14:32:30.112609 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.114743 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.114816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.114830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.114849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.114864 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.218221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.218273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.218284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.218302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.218313 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.322373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.322476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.322486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.322519 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.322542 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.426906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.427288 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.427380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.427475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.427535 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.529879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.530211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.530276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.530356 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.530460 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.633011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.633057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.633072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.633088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.633100 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.736290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.736594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.736637 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.736669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.736688 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.839603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.839670 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.839682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.839701 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.839712 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.944204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.944254 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.944267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.944286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:30 crc kubenswrapper[4718]: I1210 14:32:30.944300 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:30Z","lastTransitionTime":"2025-12-10T14:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.019981 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.020024 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:31 crc kubenswrapper[4718]: E1210 14:32:31.020191 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:31 crc kubenswrapper[4718]: E1210 14:32:31.020536 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.047056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.047121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.047134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.047154 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.047169 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.068498 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.069358 4718 scope.go:117] "RemoveContainer" containerID="b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936" Dec 10 14:32:31 crc kubenswrapper[4718]: E1210 14:32:31.069566 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.085122 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.096008 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc 
kubenswrapper[4718]: I1210 14:32:31.108903 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.121748 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.135361 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.150271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.150318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.150329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.150345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.150355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.155302 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae
754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.170054 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.184469 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.200954 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.220313 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.236767 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.251276 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.252865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.252909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.252925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.252945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.252958 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.270638 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.285889 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.299214 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.313216 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.329591 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":
\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:31Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.356288 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.356343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.356355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.356376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.356402 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.460536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.460630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.460643 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.460666 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.460679 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.563417 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.563479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.563493 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.563513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.563528 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.666822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.666874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.666883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.666903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.666919 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.769754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.769802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.769815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.769831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.769844 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.872060 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.872105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.872117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.872134 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.872148 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.974582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.974642 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.974661 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.974691 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:31 crc kubenswrapper[4718]: I1210 14:32:31.974716 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:31Z","lastTransitionTime":"2025-12-10T14:32:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.019984 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.020187 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:32 crc kubenswrapper[4718]: E1210 14:32:32.020312 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:32 crc kubenswrapper[4718]: E1210 14:32:32.020413 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.077851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.077906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.077919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.077940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.077954 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.180785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.180827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.180838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.180852 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.180863 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.284897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.284957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.284972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.284993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.285012 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.388169 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.388231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.388243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.388260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.388275 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.491727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.491785 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.491802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.491817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.491828 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.594936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.595023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.595038 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.595061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.595084 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.699973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.700056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.700070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.700099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.700122 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.803113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.803167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.803180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.803197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.803208 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.906064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.906475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.906737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.906945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:32 crc kubenswrapper[4718]: I1210 14:32:32.907096 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:32Z","lastTransitionTime":"2025-12-10T14:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.010180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.010241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.010262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.010286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.010303 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.019458 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:33 crc kubenswrapper[4718]: E1210 14:32:33.019606 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.019458 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:33 crc kubenswrapper[4718]: E1210 14:32:33.019688 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.113547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.113594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.113608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.113632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.113648 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.216765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.217067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.217235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.217411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.217537 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.321746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.321796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.321808 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.321836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.321852 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.371598 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:33 crc kubenswrapper[4718]: E1210 14:32:33.371865 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:33 crc kubenswrapper[4718]: E1210 14:32:33.371997 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:32:49.371965967 +0000 UTC m=+74.321189564 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.424776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.424839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.424849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.424868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.424879 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.528123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.528199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.528214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.528236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.528551 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.633126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.633180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.633190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.633206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.633218 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.736487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.736544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.736560 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.736585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.736606 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.840168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.840221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.840231 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.840245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.840255 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.942858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.943145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.943222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.943298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:33 crc kubenswrapper[4718]: I1210 14:32:33.943371 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:33Z","lastTransitionTime":"2025-12-10T14:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.020279 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.020301 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:34 crc kubenswrapper[4718]: E1210 14:32:34.020494 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:34 crc kubenswrapper[4718]: E1210 14:32:34.020622 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.047022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.047121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.047151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.047183 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.047206 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.151093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.151172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.151187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.151214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.151228 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.254543 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.254604 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.254619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.254641 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.254655 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.357457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.357533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.357543 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.357561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.357571 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.460489 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.460599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.460615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.460639 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.460652 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.563990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.564107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.564136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.564167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.564190 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.667501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.667545 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.667554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.667567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.667577 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.770192 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.770237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.770249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.770265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.770277 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.873402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.873454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.873466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.873483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.873495 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.976420 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.976470 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.976479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.976497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:34 crc kubenswrapper[4718]: I1210 14:32:34.976509 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:34Z","lastTransitionTime":"2025-12-10T14:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.020303 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.020453 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:35 crc kubenswrapper[4718]: E1210 14:32:35.020564 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:35 crc kubenswrapper[4718]: E1210 14:32:35.020742 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.079538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.079668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.079684 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.079703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.079717 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.181944 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.181993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.182004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.182020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.182031 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.284781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.284821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.284831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.284845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.284856 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.387070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.387102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.387113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.387127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.387137 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.477458 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.489824 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.489861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.489869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.489882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.489892 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.489941 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1
b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.501983 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.515187 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.528626 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.540217 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.551995 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.564620 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.577904 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.592318 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.592619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.592690 4718 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.592767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.592831 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.593029 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 
10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.609482 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd
4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.625738 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.639040 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.662847 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae
754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.678915 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.693152 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.695571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.695611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.695623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 
14:32:35.695640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.695651 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.706164 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.718685 4718 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:35Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.798036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.798104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.798118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.798146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.798162 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.901022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.901097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.901115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.901142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:35 crc kubenswrapper[4718]: I1210 14:32:35.901156 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:35Z","lastTransitionTime":"2025-12-10T14:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.003754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.003795 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.003803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.003816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.003826 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.019410 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.019449 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:36 crc kubenswrapper[4718]: E1210 14:32:36.019609 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:36 crc kubenswrapper[4718]: E1210 14:32:36.019814 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.034049 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.047745 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.062185 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.074978 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.087042 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.100024 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.106578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.106612 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.106623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.106640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 
14:32:36.106653 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.112714 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.126046 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.141715 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.153921 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc 
kubenswrapper[4718]: I1210 14:32:36.167375 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.185495 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae
754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.197699 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.209622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.209663 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.209674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.209693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.209705 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.210871 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.223322 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.237102 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.247361 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:36Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.313703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.314125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.314312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.314511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.314653 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.417523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.417584 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.417596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.417619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.417637 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.520122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.520479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.520581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.520677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.520774 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.624098 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.624143 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.624153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.624168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.624177 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.726813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.727172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.727188 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.727205 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.727215 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.830017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.830089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.830102 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.830127 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.830141 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.933846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.933889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.933899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.933913 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:36 crc kubenswrapper[4718]: I1210 14:32:36.933923 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:36Z","lastTransitionTime":"2025-12-10T14:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.020284 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.020308 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:37 crc kubenswrapper[4718]: E1210 14:32:37.020540 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:37 crc kubenswrapper[4718]: E1210 14:32:37.020775 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.036923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.036962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.036972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.036987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.036998 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.139477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.139581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.139603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.139634 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.139658 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.243582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.243626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.243639 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.243654 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.243666 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.347172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.347527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.347615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.347688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.347759 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.450076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.450135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.450146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.450164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.450178 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.553746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.553806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.553816 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.553831 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.553844 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.656013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.656492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.656601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.656716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.656820 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.759022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.759056 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.759066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.759079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.759089 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.870988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.871025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.871034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.871052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.871062 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.973485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.973794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.973906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.974010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:37 crc kubenswrapper[4718]: I1210 14:32:37.974104 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:37Z","lastTransitionTime":"2025-12-10T14:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.020168 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:38 crc kubenswrapper[4718]: E1210 14:32:38.020336 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.020688 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:38 crc kubenswrapper[4718]: E1210 14:32:38.020897 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.078337 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.078529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.078552 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.078567 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.078578 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.181201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.181290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.181306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.181327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.181341 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.283969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.284032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.284042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.284058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.284069 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.386786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.387090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.387168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.387248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.387311 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.490115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.490869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.490968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.491055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.491136 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.594142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.594220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.594234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.594250 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.594261 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.697048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.697291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.697365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.697487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.697602 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.800748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.801247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.801352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.801559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.801650 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.905922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.906432 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.906727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.906885 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:38 crc kubenswrapper[4718]: I1210 14:32:38.907065 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:38Z","lastTransitionTime":"2025-12-10T14:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.009784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.009830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.009839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.009854 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.009866 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.019754 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:39 crc kubenswrapper[4718]: E1210 14:32:39.020191 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.019881 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:39 crc kubenswrapper[4718]: E1210 14:32:39.020568 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.112879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.112936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.112949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.112968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.112982 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.215765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.215837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.215856 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.215881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.215902 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.319797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.319864 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.319876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.319897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.319908 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.423796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.423897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.423933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.423954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.423968 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.526780 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.526819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.526828 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.526845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.526856 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.629753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.629798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.629810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.629829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.629841 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.732799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.732847 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.732860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.732875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.732886 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.836648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.836742 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.836757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.836774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.836788 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.939119 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.939159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.939173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.939187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:39 crc kubenswrapper[4718]: I1210 14:32:39.939199 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:39Z","lastTransitionTime":"2025-12-10T14:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.020269 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.020269 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.020465 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.020499 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.041214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.041241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.041253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.041267 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.041280 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.144759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.144798 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.144810 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.144826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.144838 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.247434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.247478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.247490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.247509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.247526 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.350348 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.350447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.350464 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.350485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.350500 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.396648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.396686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.396696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.396712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.396723 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.411577 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:40Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.418703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.418774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.418794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.418821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.418834 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.435989 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:40Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.440826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.440886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.440902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.440929 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.440947 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.454119 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:40Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.459649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.459703 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.459718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.459746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.459761 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.474245 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:40Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.479533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.479578 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.479590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.479610 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.479621 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.494277 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:40Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:40 crc kubenswrapper[4718]: E1210 14:32:40.494502 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.496305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.496344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.496364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.496384 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.496417 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.599339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.599491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.599529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.599565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.599591 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.702824 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.702863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.702875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.702891 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.702903 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.806430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.806475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.806486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.806503 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.806515 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.909683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.909991 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.910071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.910153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:40 crc kubenswrapper[4718]: I1210 14:32:40.910213 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:40Z","lastTransitionTime":"2025-12-10T14:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.012883 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.012922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.012932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.012946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.012958 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.019688 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:41 crc kubenswrapper[4718]: E1210 14:32:41.019798 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.019932 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:41 crc kubenswrapper[4718]: E1210 14:32:41.019999 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.116136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.116487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.116815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.117027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.117202 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.220767 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.221208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.221291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.221376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.221466 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.324950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.325020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.325031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.325051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.325068 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.428322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.428380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.428409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.428431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.428447 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.531091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.531168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.531182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.531214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.531228 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.634597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.634661 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.634679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.634704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.634723 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.737104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.737164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.737177 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.737195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.737206 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.840178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.840230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.840243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.840262 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.840274 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.943187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.943228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.943237 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.943251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:41 crc kubenswrapper[4718]: I1210 14:32:41.943261 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:41Z","lastTransitionTime":"2025-12-10T14:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.020074 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.020157 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:42 crc kubenswrapper[4718]: E1210 14:32:42.020278 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:42 crc kubenswrapper[4718]: E1210 14:32:42.020529 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.045786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.045833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.045845 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.045862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.045875 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.148367 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.148434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.148446 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.148462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.148474 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.252110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.252151 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.252167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.252196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.252209 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.355544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.355609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.355627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.355649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.355661 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.458542 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.458587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.458598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.458614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.458626 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.562166 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.562216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.562232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.562248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.562260 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.665342 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.665419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.665433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.665464 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.665479 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.769605 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.769660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.769671 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.769689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.769707 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.873010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.873059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.873071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.873093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.873107 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.975865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.975936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.975952 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.975979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:42 crc kubenswrapper[4718]: I1210 14:32:42.975994 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:42Z","lastTransitionTime":"2025-12-10T14:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.019592 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.019630 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:43 crc kubenswrapper[4718]: E1210 14:32:43.019816 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:43 crc kubenswrapper[4718]: E1210 14:32:43.020001 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.080440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.080512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.080525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.080546 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.080559 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.183465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.183528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.183537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.183568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.183579 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.286757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.286821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.286836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.286865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.286879 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.389566 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.389617 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.389629 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.389649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.389661 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.493322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.493422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.493440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.493466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.493484 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.597164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.597221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.597232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.597253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.597266 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.699979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.700035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.700047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.700068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.700105 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.803018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.803061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.803072 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.803090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.803102 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.906419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.906462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.906471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.906486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:43 crc kubenswrapper[4718]: I1210 14:32:43.906497 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:43Z","lastTransitionTime":"2025-12-10T14:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.009428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.009496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.009509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.009528 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.009948 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.019691 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.019774 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:44 crc kubenswrapper[4718]: E1210 14:32:44.019837 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:44 crc kubenswrapper[4718]: E1210 14:32:44.019926 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.112753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.112805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.112817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.112835 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.112850 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.215846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.215884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.215895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.215909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.215920 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.318965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.319245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.319359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.319484 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.319563 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.422513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.422554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.422565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.422581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.422596 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.525653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.525714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.525728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.525749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.525766 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.628324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.628402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.628414 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.628455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.628469 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.730936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.731498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.731628 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.731717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.731780 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.834214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.834251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.834260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.834275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.834286 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.936772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.936832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.936851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.936875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:44 crc kubenswrapper[4718]: I1210 14:32:44.936892 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:44Z","lastTransitionTime":"2025-12-10T14:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.020077 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:45 crc kubenswrapper[4718]: E1210 14:32:45.020224 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.020600 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:45 crc kubenswrapper[4718]: E1210 14:32:45.020933 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.039300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.039336 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.039347 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.039359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.039371 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.142685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.142760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.142784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.142802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.142844 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.246067 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.246126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.246146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.246168 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.246185 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.349066 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.349120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.349132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.349149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.349166 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.452107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.452174 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.452187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.452211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.452225 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.554888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.554963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.554973 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.554999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.555011 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.658518 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.658572 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.658586 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.658607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.658622 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.761901 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.761962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.761978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.761998 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.762013 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.865101 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.865175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.865187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.865208 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.865222 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.969241 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.969704 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.969793 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.969897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:45 crc kubenswrapper[4718]: I1210 14:32:45.969985 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:45Z","lastTransitionTime":"2025-12-10T14:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.020084 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.020130 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:46 crc kubenswrapper[4718]: E1210 14:32:46.021078 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:46 crc kubenswrapper[4718]: E1210 14:32:46.021454 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.021532 4718 scope.go:117] "RemoveContainer" containerID="b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.035701 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b1
53f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.053290 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.073969 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.074308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.074345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.074362 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.074377 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.074401 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.089018 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.105485 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.120789 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.137528 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.151929 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.167754 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.176300 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.176321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.176329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.176343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.176352 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.181636 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.197915 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0
d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:2
0Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.210710 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc 
kubenswrapper[4718]: I1210 14:32:46.226174 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.241806 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.257123 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.279321 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae
754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.279884 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.280009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.280027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.280046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.280090 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.299132 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:46Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.382636 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.382673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.382685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: 
I1210 14:32:46.382700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.382716 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.486177 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.486221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.486230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.486247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.486257 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.589089 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.589140 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.589152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.589170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.589184 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.691858 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.691932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.692011 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.692234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.692297 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.799028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.799284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.799294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.799310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.799320 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.902023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.902083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.902094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.902112 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.902125 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:46Z","lastTransitionTime":"2025-12-10T14:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.992082 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/1.log" Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.997833 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855"} Dec 10 14:32:46 crc kubenswrapper[4718]: I1210 14:32:46.998671 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.003963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.004007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.004019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.004040 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.004052 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.013969 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.020604 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.020646 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:47 crc kubenswrapper[4718]: E1210 14:32:47.020775 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:47 crc kubenswrapper[4718]: E1210 14:32:47.020885 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.026508 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.035400 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.037995 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.053191 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.066606 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.080896 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.095126 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.107979 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.108033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.108047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.108068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.108082 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.109954 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.123613 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.139868 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.162891 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.180785 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc 
kubenswrapper[4718]: I1210 14:32:47.198587 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d752
5847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 
14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.210248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.210301 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.210312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.210329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.210341 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.213896 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.227123 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.240114 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.262446 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:47Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.314327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.314369 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.314379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.314407 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.314418 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.416788 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.416833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.416846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.416862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.416873 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.519827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.519880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.519892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.519912 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.519924 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.622967 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.623018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.623028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.623043 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.623056 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.726440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.726479 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.726488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.726505 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.726518 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.853826 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.853888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.853900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.853919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.853935 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.956758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.956839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.956851 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.956871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:47 crc kubenswrapper[4718]: I1210 14:32:47.956883 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:47Z","lastTransitionTime":"2025-12-10T14:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.019316 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.019358 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:48 crc kubenswrapper[4718]: E1210 14:32:48.019465 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:48 crc kubenswrapper[4718]: E1210 14:32:48.019530 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.059370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.059429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.059441 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.059455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.059469 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.162173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.162220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.162230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.162245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.162256 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.264621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.264679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.264689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.264708 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.264723 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.367263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.367315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.367329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.367357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.367369 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.471259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.471321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.471332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.471352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.471364 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.574023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.574073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.574085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.574103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.574115 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.676802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.676867 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.676878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.676898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.676913 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.779611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.779681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.779700 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.779727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.779742 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.882692 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.882746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.882759 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.882775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.882789 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.986353 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.986487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.986508 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.986538 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:48 crc kubenswrapper[4718]: I1210 14:32:48.986557 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:48Z","lastTransitionTime":"2025-12-10T14:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.019675 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.019759 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:49 crc kubenswrapper[4718]: E1210 14:32:49.019832 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:49 crc kubenswrapper[4718]: E1210 14:32:49.019985 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.089331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.089443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.089466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.089492 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.089513 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.192668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.192707 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.192716 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.192731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.192741 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.296091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.296149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.296161 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.296180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.296190 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.399079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.399111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.399153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.399170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.399181 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.426608 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:49 crc kubenswrapper[4718]: E1210 14:32:49.426785 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:49 crc kubenswrapper[4718]: E1210 14:32:49.426897 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:33:21.426872908 +0000 UTC m=+106.376096325 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.501638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.501673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.501681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.501697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.501708 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.603787 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.603864 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.603882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.603906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.603922 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.708665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.708721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.708735 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.708753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.708767 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.811450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.811489 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.811498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.811513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.811523 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.914530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.914571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.914580 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.914599 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:49 crc kubenswrapper[4718]: I1210 14:32:49.914627 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:49Z","lastTransitionTime":"2025-12-10T14:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.011292 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/2.log" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.011895 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/1.log" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.014709 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855" exitCode=1 Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.014772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.015454 4718 scope.go:117] "RemoveContainer" containerID="b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.015838 4718 scope.go:117] "RemoveContainer" containerID="410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.017587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.017751 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.017805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.017825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.017839 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.017849 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.020554 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.020676 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.020758 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.020892 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.035577 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 
14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.050686 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.062653 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.073768 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.095825 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.108441 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.118639 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.123365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.123562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.123683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.123773 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.123873 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.136573 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.149074 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.163080 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.174523 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.187298 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.202049 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.218125 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.227469 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc 
kubenswrapper[4718]: I1210 14:32:50.227556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.227579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.227614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.227635 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.233468 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.249339 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.267497 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.278633 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.330381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.330478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.330491 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.330525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.330547 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.433953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.434017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.434025 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.434039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.434050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.537437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.537499 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.537514 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.537531 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.537543 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.640299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.640359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.640368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.640400 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.640413 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.743357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.743427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.743438 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.743457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.743470 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.846275 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.846368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.846451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.846490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.846514 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.880310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.880374 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.880411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.880439 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.880461 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.893976 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.898019 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.898078 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.898090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.898109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.898121 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.911893 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.915582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.915623 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.915643 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.915668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.915687 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.930309 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.935533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.935579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.935597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.935620 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.935633 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.960894 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.965866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.965907 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.965916 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.965933 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.965943 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.980516 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:50Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:50 crc kubenswrapper[4718]: E1210 14:32:50.980659 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.982757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.982794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.982804 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.982820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:50 crc kubenswrapper[4718]: I1210 14:32:50.982834 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:50Z","lastTransitionTime":"2025-12-10T14:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.019583 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.019742 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:51 crc kubenswrapper[4718]: E1210 14:32:51.019897 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:51 crc kubenswrapper[4718]: E1210 14:32:51.019967 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.021516 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/2.log" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.086181 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.086239 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.086251 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.086270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.086281 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.189103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.189181 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.189196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.189216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.189231 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.291820 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.291882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.291896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.291914 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.291929 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.394679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.394732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.394744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.394764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.394777 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.498172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.498225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.498238 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.498260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.498273 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.601265 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.601349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.601361 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.601402 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.601415 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.704662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.704741 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.704762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.704789 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.704807 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.808794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.808857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.808869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.808890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.808903 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.911506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.911562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.911575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.911596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:51 crc kubenswrapper[4718]: I1210 14:32:51.911611 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:51Z","lastTransitionTime":"2025-12-10T14:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.014565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.014609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.014621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.014640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.014651 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.020294 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.020294 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:52 crc kubenswrapper[4718]: E1210 14:32:52.020488 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:52 crc kubenswrapper[4718]: E1210 14:32:52.020528 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.118148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.118214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.118228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.118245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.118259 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.220650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.220715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.220726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.220746 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.220758 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.324458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.324536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.324551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.324582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.324601 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.430697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.430786 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.430799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.430819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.430836 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.534136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.534175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.534185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.534201 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.534214 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.636827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.636862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.636873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.636889 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.636903 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.739949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.740303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.740532 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.740555 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.740568 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.843044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.843085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.843094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.843111 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.843126 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.945869 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.946105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.946170 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.946271 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:52 crc kubenswrapper[4718]: I1210 14:32:52.946342 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:52Z","lastTransitionTime":"2025-12-10T14:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.020116 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:53 crc kubenswrapper[4718]: E1210 14:32:53.020354 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.021177 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:53 crc kubenswrapper[4718]: E1210 14:32:53.021561 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.049175 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.049220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.049232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.049248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.049260 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.152624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.152698 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.152715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.152740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.152759 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.255871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.256259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.256273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.256289 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.256300 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.360430 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.360475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.360487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.360504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.360518 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.463770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.463812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.463822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.463836 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.463846 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.566548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.566601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.566613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.566630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.566643 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.669325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.669368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.669376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.669405 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.669416 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.773363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.773428 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.773440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.773459 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.773471 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.876371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.876436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.876453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.876478 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.876495 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.980185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.980233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.980252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.980276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:53 crc kubenswrapper[4718]: I1210 14:32:53.980299 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:53Z","lastTransitionTime":"2025-12-10T14:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.020190 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.020410 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:54 crc kubenswrapper[4718]: E1210 14:32:54.020463 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:54 crc kubenswrapper[4718]: E1210 14:32:54.020613 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.039829 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/0.log" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.039926 4718 generic.go:334] "Generic (PLEG): container finished" podID="9db3984f-4589-462f-94d7-89a885be63d5" containerID="2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54" exitCode=1 Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.039988 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerDied","Data":"2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.041187 4718 scope.go:117] "RemoveContainer" containerID="2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.057117 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.075148 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10
T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.084674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.084723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.084736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.084752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.084762 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.096150 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f4
1beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.114985 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.135401 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.164937 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.187220 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.187279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.187291 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.187307 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.187319 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.189077 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.207163 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.224352 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.235415 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.246012 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.260878 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.275572 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc 
kubenswrapper[4718]: I1210 14:32:54.289701 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.289879 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.289911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.289925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.289946 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.289961 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.304250 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.318499 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.340582 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.355723 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:54Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.393354 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.393435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.393450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.393475 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.393488 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.496710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.496764 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.496775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.496797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.496815 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.599031 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.599076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.599088 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.599105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.599117 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.702057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.702090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.702099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.702113 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.702122 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.805206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.805270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.805284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.805310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.805324 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.908218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.908270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.908282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.908304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:54 crc kubenswrapper[4718]: I1210 14:32:54.908323 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:54Z","lastTransitionTime":"2025-12-10T14:32:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.011690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.011736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.011747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.011772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.011789 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.019852 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:55 crc kubenswrapper[4718]: E1210 14:32:55.020122 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.020605 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:55 crc kubenswrapper[4718]: E1210 14:32:55.020779 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.045742 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/0.log" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.045830 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerStarted","Data":"0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.070299 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.102513 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc 
kubenswrapper[4718]: I1210 14:32:55.115548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.115615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.115627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.115647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.115658 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.130945 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.148874 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.164029 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.179904 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.194310 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.208991 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04
a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.219026 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.219865 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.219919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.220023 4718 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.220036 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.226717 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.241506 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.257095 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] 
Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.270016 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aa
b8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.285239 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.300889 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.316167 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.323782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.323856 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.323868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.323896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.323910 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.328286 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.344699 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.357587 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:55Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.427348 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.427431 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.427444 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.427465 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.427476 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.531306 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.531366 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.531383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.531450 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.531482 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.634647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.634737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.634765 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.634799 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.634825 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.737938 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.738026 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.738042 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.738064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.738086 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.842292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.842419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.842455 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.842486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.842509 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.946678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.946745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.946763 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.946784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:55 crc kubenswrapper[4718]: I1210 14:32:55.946800 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:55Z","lastTransitionTime":"2025-12-10T14:32:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.019909 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.019917 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:56 crc kubenswrapper[4718]: E1210 14:32:56.020101 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:56 crc kubenswrapper[4718]: E1210 14:32:56.020172 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.045706 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a9
5b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65
a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.049358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.049425 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.049435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.049453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.049466 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.064753 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc 
kubenswrapper[4718]: I1210 14:32:56.088834 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b37a96430550aa45384959d4587f4ab21c637451e334481e14e7b3a666d7f936\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:27Z\\\",\\\"message\\\":\\\"shift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:26.076824 6228 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1210 14:32:26.076822 6228 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1210 14:32:26.076881 6228 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert 
Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.106812 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.118321 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.129584 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.139660 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.150512 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04
a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.152146 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.152202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.152216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.152231 4718 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.152266 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.167234 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.185345 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.202077 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] 
Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.215024 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aa
b8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.227836 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.239464 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.251518 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.254511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.254569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.254582 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.254611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.254625 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.264024 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.277487 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.290176 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:32:56Z is after 2025-08-24T17:21:41Z" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.357853 4718 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.357918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.357931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.357954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.357967 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.460045 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.460097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.460110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.460132 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.460147 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.562678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.562730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.562739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.562757 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.562769 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.666105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.666152 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.666163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.666180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.666193 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.769258 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.769314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.769327 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.769344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.769355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.873121 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.873185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.873196 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.873222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.873243 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.977590 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.977627 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.977638 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.977650 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:56 crc kubenswrapper[4718]: I1210 14:32:56.977661 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:56Z","lastTransitionTime":"2025-12-10T14:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.020089 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.020231 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:57 crc kubenswrapper[4718]: E1210 14:32:57.020460 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:57 crc kubenswrapper[4718]: E1210 14:32:57.020248 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.080797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.080854 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.080872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.080895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.080914 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.184221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.184292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.184304 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.184323 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.184336 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.287422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.287462 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.287488 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.287506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.287518 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.390898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.390951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.390961 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.390974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.390990 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.494142 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.494191 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.494204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.494221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.494235 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.606260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.606299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.606308 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.606324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.606336 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.710302 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.710424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.710445 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.710471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.710489 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.813878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.813924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.813941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.813963 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.813974 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.917061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.917114 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.917125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.917141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:57 crc kubenswrapper[4718]: I1210 14:32:57.917153 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:57Z","lastTransitionTime":"2025-12-10T14:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019376 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019446 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:32:58 crc kubenswrapper[4718]: E1210 14:32:58.019566 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019672 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019702 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019727 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: E1210 14:32:58.019717 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.019765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.121918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.121984 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.121997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.122015 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.122027 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.224284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.224339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.224349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.224364 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.224375 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.327153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.327200 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.327212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.327236 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.327250 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.430276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.430346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.430363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.430424 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.430442 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.534317 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.534368 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.534380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.534437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.534462 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.638104 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.638195 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.638211 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.638226 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.638236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.740718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.740809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.740827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.740846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.740860 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.843932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.844020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.844046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.844073 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.844092 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.947540 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.947619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.947637 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.947664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:58 crc kubenswrapper[4718]: I1210 14:32:58.947683 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:58Z","lastTransitionTime":"2025-12-10T14:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.019912 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.020107 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:32:59 crc kubenswrapper[4718]: E1210 14:32:59.020204 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:32:59 crc kubenswrapper[4718]: E1210 14:32:59.020384 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.051569 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.051626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.051648 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.051676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.051695 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.154880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.155613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.155878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.155982 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.156082 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.259806 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.259871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.259881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.259903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.259920 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.362699 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.362770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.362778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.362792 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.362804 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.467181 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.467253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.467276 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.467295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.467307 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.570843 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.570900 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.570913 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.570935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.570953 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.674345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.674772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.674876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.674983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.675075 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.778513 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.778593 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.778619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.778825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.779024 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.882010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.882059 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.882074 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.882096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.882110 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.985926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.985978 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.985989 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.986013 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:32:59 crc kubenswrapper[4718]: I1210 14:32:59.986027 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:32:59Z","lastTransitionTime":"2025-12-10T14:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.020415 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:00 crc kubenswrapper[4718]: E1210 14:33:00.020620 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.021065 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:00 crc kubenswrapper[4718]: E1210 14:33:00.021862 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.088501 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.088556 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.088570 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.088588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.088599 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.191295 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.191899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.192003 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.192090 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.192190 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.295688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.296182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.296286 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.296379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.296495 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.399695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.399745 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.399760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.399778 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.399791 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.502931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.502987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.502999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.503017 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.503031 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.606340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.606378 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.606413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.606451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.606465 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.712344 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.712381 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.712419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.712436 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.712450 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.815853 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.815904 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.815915 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.815932 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.815944 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.919457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.919510 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.919520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.919536 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:00 crc kubenswrapper[4718]: I1210 14:33:00.919547 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:00Z","lastTransitionTime":"2025-12-10T14:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.019638 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.019632 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.019806 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.019853 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.020945 4718 scope.go:117] "RemoveContainer" containerID="410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.021250 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.021411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.021474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.021485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.021506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.021518 4718 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.037184 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.049557 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.061671 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] 
Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.075773 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aa
b8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.086984 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.097955 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.111071 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.124876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.124924 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.124934 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.124950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.124965 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.125156 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.137619 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.152430 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.162188 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc 
kubenswrapper[4718]: I1210 14:33:01.172951 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.192566 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae
754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.206953 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.223664 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.228052 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.228115 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.228130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 
14:33:01.228149 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.228163 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.233651 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.233718 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.233740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.233771 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.233795 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.238491 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.249144 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.253413 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.253466 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.253483 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.253504 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.253517 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.254311 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.265981 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.265934 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.269997 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.270035 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.270047 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.270064 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.270077 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.286904 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.291264 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.291290 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.291299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.291313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.291324 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.303947 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.307517 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.307553 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.307564 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.307581 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.307593 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.319233 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:01Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.319432 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.330585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.330645 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.330656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.330674 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.330687 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.434158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.434216 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.434228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.434245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.434260 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.537649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.537748 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.537775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.537809 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.537835 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.641791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.641923 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.641955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.641999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.642024 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.668898 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.669298 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:05.669252912 +0000 UTC m=+150.618476329 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.745085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.745159 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.745171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.745189 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.745200 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.848358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.848494 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.848527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.848553 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.848573 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.951177 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.951247 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.951268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.951292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.951313 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:01Z","lastTransitionTime":"2025-12-10T14:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.972361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.972515 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:01 crc kubenswrapper[4718]: I1210 14:33:01.972569 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972518 4718 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972689 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972725 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972731 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:05.972690174 +0000 UTC m=+150.921913621 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972735 4718 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972819 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:05.972795247 +0000 UTC m=+150.922018704 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972745 4718 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:01 crc kubenswrapper[4718]: E1210 14:33:01.972915 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:05.97289589 +0000 UTC m=+150.922119347 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.019808 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.019860 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:02 crc kubenswrapper[4718]: E1210 14:33:02.019936 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:02 crc kubenswrapper[4718]: E1210 14:33:02.020064 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.054710 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.054780 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.054791 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.054813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.054828 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.073056 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:02 crc kubenswrapper[4718]: E1210 14:33:02.073224 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 10 14:33:02 crc kubenswrapper[4718]: E1210 14:33:02.073251 4718 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 10 14:33:02 crc kubenswrapper[4718]: E1210 14:33:02.073267 4718 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:02 crc kubenswrapper[4718]: E1210 14:33:02.073331 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:06.073310318 +0000 UTC m=+151.022533745 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.158122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.158167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.158177 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.158192 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.158203 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.260452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.260512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.260530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.260551 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.260567 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.364919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.365004 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.365023 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.365051 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.365070 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.468454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.468497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.468507 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.468523 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.468532 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.571895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.572802 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.572838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.572863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.572879 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.676597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.676665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.676680 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.676705 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.676718 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.779591 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.779635 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.779647 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.779662 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.779674 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.882324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.882434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.882454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.882480 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.882498 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.985314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.985358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.985370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.985408 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:02 crc kubenswrapper[4718]: I1210 14:33:02.985423 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:02Z","lastTransitionTime":"2025-12-10T14:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.020155 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.020179 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:03 crc kubenswrapper[4718]: E1210 14:33:03.020304 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:03 crc kubenswrapper[4718]: E1210 14:33:03.020430 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.088339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.088422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.088433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.088453 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.088464 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.190695 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.190755 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.190769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.190792 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.190809 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.294274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.294329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.294346 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.294372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.294421 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.396827 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.396908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.396945 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.396977 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.397000 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.500756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.500837 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.500875 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.500911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.500937 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.604846 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.604925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.604949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.604980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.605002 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.708203 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.708282 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.708309 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.708376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.708432 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.813587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.813830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.813868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.813947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.814013 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.919860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.920249 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.920357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.920537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:03 crc kubenswrapper[4718]: I1210 14:33:03.920678 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:03Z","lastTransitionTime":"2025-12-10T14:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.020207 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.020207 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:04 crc kubenswrapper[4718]: E1210 14:33:04.020589 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:04 crc kubenswrapper[4718]: E1210 14:33:04.020884 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.023092 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.023233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.023334 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.023476 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.023573 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.126324 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.126380 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.126423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.126443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.126456 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.229877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.229937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.229955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.230126 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.230159 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.333085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.333678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.333766 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.333860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.333947 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.436185 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.436235 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.436252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.436269 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.436283 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.538890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.538935 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.538947 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.538965 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.538978 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.643022 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.643082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.643099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.643125 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.643143 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.746079 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.746158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.746176 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.746206 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.746229 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.849568 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.849625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.849633 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.849649 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.849662 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.952681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.952729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.952737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.952753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:04 crc kubenswrapper[4718]: I1210 14:33:04.952765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:04Z","lastTransitionTime":"2025-12-10T14:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.020029 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.020095 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:05 crc kubenswrapper[4718]: E1210 14:33:05.020240 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:05 crc kubenswrapper[4718]: E1210 14:33:05.020364 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.055899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.055956 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.055969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.055990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.056006 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.159124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.159232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.159253 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.159279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.159296 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.262087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.262145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.262162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.262182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.262194 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.365225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.365303 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.365314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.365331 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.365341 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.467813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.467873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.467886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.467903 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.467913 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.570689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.570740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.570751 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.570769 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.570781 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.673754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.673803 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.673813 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.673832 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.673844 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.777273 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.777340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.777351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.777371 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.777382 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.880855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.880937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.880949 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.880972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.880987 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.984406 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.984459 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.984471 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.984487 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:05 crc kubenswrapper[4718]: I1210 14:33:05.984500 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:05Z","lastTransitionTime":"2025-12-10T14:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.020279 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.020346 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:06 crc kubenswrapper[4718]: E1210 14:33:06.020499 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:06 crc kubenswrapper[4718]: E1210 14:33:06.020582 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.036279 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.050720 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.073778 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae
754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.087136 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.087171 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.087180 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.087194 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.087203 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.090916 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.105152 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.117211 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.128174 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.141829 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.158088 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.177085 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.190107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.190190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.190202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.190223 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.190234 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.194556 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.209820 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aab8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.229007 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.240587 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-ku
belet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.253712 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.264887 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d28
69646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.276727 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc 
kubenswrapper[4718]: I1210 14:33:06.294957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.295018 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.295032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.295058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.295073 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.297303 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:06Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.398685 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.398752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.398770 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.398797 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.398819 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.503021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.503070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.503082 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.503103 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.503116 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.606682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.607034 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.607260 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.607574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.607801 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.711652 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.712012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.712234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.712335 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.712442 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.815754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.815818 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.815833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.815859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.815873 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.918655 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.918717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.918728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.918750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:06 crc kubenswrapper[4718]: I1210 14:33:06.918761 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:06Z","lastTransitionTime":"2025-12-10T14:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.020060 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.020121 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:07 crc kubenswrapper[4718]: E1210 14:33:07.020817 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:07 crc kubenswrapper[4718]: E1210 14:33:07.021065 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.022796 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.023163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.023315 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.023486 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.023655 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.126646 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.126713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.126730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.126756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.126777 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.229173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.229248 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.229259 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.229274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.229284 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.332537 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.332598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.332608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.332626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.332640 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.435358 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.435533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.435561 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.435596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.435623 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.539375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.539897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.540100 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.540325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.540641 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.644232 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.644296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.644314 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.644339 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.644360 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.746840 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.747138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.747212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.747280 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.747354 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.850321 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.850419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.850437 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.850457 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.850471 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.953279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.953673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.953792 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.953919 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:07 crc kubenswrapper[4718]: I1210 14:33:07.954028 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:07Z","lastTransitionTime":"2025-12-10T14:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.019895 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.020018 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:08 crc kubenswrapper[4718]: E1210 14:33:08.020538 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:08 crc kubenswrapper[4718]: E1210 14:33:08.020673 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.057058 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.057130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.057141 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.057162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.057178 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.160443 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.160485 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.160496 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.160512 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.160522 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.263193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.263240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.263252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.263268 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.263280 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.366481 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.366565 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.366587 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.366622 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.366644 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.469736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.469794 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.469804 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.469819 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.469829 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.573313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.573379 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.573421 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.573451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.573468 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.676452 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.676548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.676583 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.676603 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.676617 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.779996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.780062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.780071 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.780091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.780108 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.883320 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.883375 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.883404 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.883427 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.883439 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.987167 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.987245 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.987279 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.987310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:08 crc kubenswrapper[4718]: I1210 14:33:08.987334 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:08Z","lastTransitionTime":"2025-12-10T14:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.019580 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:09 crc kubenswrapper[4718]: E1210 14:33:09.019869 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.020196 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:09 crc kubenswrapper[4718]: E1210 14:33:09.020458 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.090547 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.090676 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.090688 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.090711 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.090730 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.194618 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.194697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.194715 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.194744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.194765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.298632 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.298717 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.298740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.298782 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.298812 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.402225 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.402299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.402312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.402332 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.402345 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.505229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.506055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.506143 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.506257 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.506355 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.614214 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.614329 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.614349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.614448 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.614581 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.717877 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.717941 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.717959 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.717983 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.718003 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.821548 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.821598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.821609 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.821630 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.821643 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.924620 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.924689 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.924729 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.924762 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:09 crc kubenswrapper[4718]: I1210 14:33:09.924858 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:09Z","lastTransitionTime":"2025-12-10T14:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.019820 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:10 crc kubenswrapper[4718]: E1210 14:33:10.020213 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.020647 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:10 crc kubenswrapper[4718]: E1210 14:33:10.020820 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.026974 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.027044 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.027069 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.027091 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.027109 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.129975 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.130021 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.130036 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.130054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.130067 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.232601 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.232655 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.232731 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.232752 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.232788 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.336009 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.336099 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.336116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.336140 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.336169 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.439075 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.439138 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.439155 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.439182 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.439200 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.541868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.541917 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.541937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.541988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.542002 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.645164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.645212 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.645221 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.645240 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.645251 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.748812 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.748918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.748930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.748950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.748964 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.852419 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.852511 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.852527 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.852550 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.852565 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.955345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.955410 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.955423 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.955440 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:10 crc kubenswrapper[4718]: I1210 14:33:10.955450 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:10Z","lastTransitionTime":"2025-12-10T14:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.020147 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.020283 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.020356 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.020631 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.059509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.059577 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.059588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.059611 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.059623 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.162498 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.162579 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.162592 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.162615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.162629 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.265990 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.266092 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.266124 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.266163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.266192 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.368682 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.368732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.368744 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.368760 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.368772 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.391843 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.391881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.391892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.391906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.391918 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.407073 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.411822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.411868 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.411880 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.411897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.411913 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.484319 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.489030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.489096 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.489109 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.489129 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.489141 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.502840 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.514061 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.514145 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.514158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.514178 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.514192 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.527104 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.531860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.531940 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.531954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.531976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.531990 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.544308 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-10T14:33:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbef6824-3734-4f3d-bf18-030e5c72cff8\\\",\\\"systemUUID\\\":\\\"559eaef7-72a9-45f0-b8d3-0046c76adc0d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:11Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:11 crc kubenswrapper[4718]: E1210 14:33:11.544460 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.545987 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.546016 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.546027 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.546050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.546063 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.648861 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.648909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.648921 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.648939 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.648951 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.751887 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.751968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.751993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.752024 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.752050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.856028 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.856084 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.856094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.856117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.856129 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.959039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.959097 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.959105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.959120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:11 crc kubenswrapper[4718]: I1210 14:33:11.959151 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:11Z","lastTransitionTime":"2025-12-10T14:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.019519 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.019568 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:12 crc kubenswrapper[4718]: E1210 14:33:12.019820 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:12 crc kubenswrapper[4718]: E1210 14:33:12.020246 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.062780 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.062849 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.062871 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.062909 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.062932 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.166033 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.166120 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.166139 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.166163 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.166182 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.275679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.275753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.275775 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.275801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.275820 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.378805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.378857 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.378866 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.378882 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.378894 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.482862 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.482920 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.482934 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.482953 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.482966 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.585562 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.585608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.585619 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.585636 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.585656 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.688213 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.688284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.688294 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.688313 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.688325 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.791497 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.791758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.791863 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.791969 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.792060 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.895252 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.895298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.895310 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.895325 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.895336 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.998525 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.998594 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.998607 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.998631 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:12 crc kubenswrapper[4718]: I1210 14:33:12.998647 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:12Z","lastTransitionTime":"2025-12-10T14:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.020024 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.020118 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:13 crc kubenswrapper[4718]: E1210 14:33:13.020200 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:13 crc kubenswrapper[4718]: E1210 14:33:13.020323 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.021202 4718 scope.go:117] "RemoveContainer" containerID="410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.102229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.102697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.102706 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.102728 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.102741 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.205988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.206055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.206070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.206095 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.206108 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.308890 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.308926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.308936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.308954 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.308965 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.411830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.411886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.411898 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.411918 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.411930 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.514640 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.514683 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.514694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.514709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.514720 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.617687 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.617725 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.617737 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.617753 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.617765 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.719999 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.720037 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.720054 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.720070 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.720080 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.823228 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.823299 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.823312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.823341 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.823356 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.926588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.926643 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.926652 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.926668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:13 crc kubenswrapper[4718]: I1210 14:33:13.926678 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:13Z","lastTransitionTime":"2025-12-10T14:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.020254 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.020303 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:14 crc kubenswrapper[4718]: E1210 14:33:14.020454 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:14 crc kubenswrapper[4718]: E1210 14:33:14.020871 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.029046 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.029094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.029105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.029123 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.029137 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.043803 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.122716 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/2.log" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.127020 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.127892 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.131834 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.131876 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.131888 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.131902 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.131912 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.144476 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.159905 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.173252 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.188462 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] 
Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.199453 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aa
b8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.212021 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe
0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.231188 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.233908 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.233950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.233960 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.233980 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.233993 4718 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.249595 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.261720 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.273267 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.286827 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plan
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.311942 4718 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5ecab-9954-4a78-948a-e88c389044cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92692024d66d9335da5886842043cbe9f547d164de3fdac6376b164ec8768113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9bd6f4d07250b40c63c763479f4fda1c23d3c9f71eedb262fd9
cb238354605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://199df3bd0e866ea57708c66a7cc92e346afc57c8e49449d4cee70825f658684a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8437bd6fcc9fe1422502b2feedbb03714ddbb2620d3b6431118c34511a950de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd301838aa5021be88873ee897a124950b0e5de604202ef2e9adc1d5ae40732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86141b1111aec255b1850d00a497e541509f05d03aba5ce3e1c26bf8d7181fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86141b1111aec255b1850d00a497e
541509f05d03aba5ce3e1c26bf8d7181fdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71357acb00a7b2364de56823e8f662069885964da1fc1aa8a3e91947869f334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71357acb00a7b2364de56823e8f662069885964da1fc1aa8a3e91947869f334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://569b47e5967ce0ac32c9e788a893311624b65ce53898f938b656290e3521eac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569b47e5967ce0ac32c9e788a893311624b65ce53898f938b656290e3521eac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.329187 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.336892 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.336958 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.336972 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.336993 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.337006 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.344693 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc 
kubenswrapper[4718]: I1210 14:33:14.366121 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:33:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.383378 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0
bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.398996 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.414615 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.430755 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.439412 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.439448 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.439458 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.439474 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.439485 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.541693 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.541739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.541750 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.541776 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.541794 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.644934 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.644996 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.645008 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.645032 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.645049 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.747894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.747952 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.747962 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.747976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.747985 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.850825 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.850894 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.850905 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.850926 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.850939 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.954690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.954743 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.954754 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.954774 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:14 crc kubenswrapper[4718]: I1210 14:33:14.954785 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:14Z","lastTransitionTime":"2025-12-10T14:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.019476 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.019561 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:15 crc kubenswrapper[4718]: E1210 14:33:15.019777 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:15 crc kubenswrapper[4718]: E1210 14:33:15.019962 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.058068 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.058107 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.058117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.058130 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.058139 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.131841 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/3.log" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.132985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/2.log" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.136372 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" exitCode=1 Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.136444 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.136493 4718 scope.go:117] "RemoveContainer" containerID="410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.137739 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:33:15 crc kubenswrapper[4718]: E1210 14:33:15.137939 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.152432 4718 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.161870 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.161925 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.161938 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.161957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.161978 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.174480 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1432ae01-a453-47a7-be1d-5fdcabe6f0c8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://651112c229c32968527538e66f1b161c92411b37f45f36ca90b27b6710d7a43d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b0
84652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9405e4c833e1d215824604c6d7ec42dec156f2968d3351f110df665cdac0ce1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d898946159da9ec7806b5daa074a2313a8d3cd6086f4ae2bdfe858a7ba70eca7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.190567 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5867ee7f-0b5d-48e7-9c39-956298a845a1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ca56ecda46fc73e9f734cfd36b87b7353db70adefef0665771911d62633ce508\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65ade8ad5ab17011ad49a8283beac1b53ed3612052f8ad9c9b5521b18a71f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4c8b683dbf4a1d50d5987ea168baecaf4b2124a5a6286dc22c96cd7472db60a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://541c7810a9f52a43bd4d949a836dfed2fbf9384a1c46d64f81168a8eec1437e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.211426 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caa6ee7ab7f267517dcd1699f91f61cbc8b2dd0a135dd553e096f31a7f1dd93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.227883 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.245232 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hv62w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9db3984f-4589-462f-94d7-89a885be63d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:53Z\\\",\\\"message\\\":\\\"2025-12-10T14:32:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268\\\\n2025-12-10T14:32:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1cd36dea-7f3c-413f-b43c-2e7059f8f268 to /host/opt/cni/bin/\\\\n2025-12-10T14:32:08Z [verbose] multus-daemon started\\\\n2025-12-10T14:32:08Z [verbose] 
Readiness Indicator file check\\\\n2025-12-10T14:32:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d84p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hv62w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.258305 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xvkg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af814c60-50de-499d-a1b2-b18f3749bc35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01d0637707f68305ada3aa
b8a2cf56db32750665ef5c1bcfbfb1993fd135c3b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsmbx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xvkg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.264673 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.264712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.264726 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc 
kubenswrapper[4718]: I1210 14:33:15.264747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.264761 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.274519 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.290682 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"22aace52-6f58-4459-89d2-9fec98b12ead\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b18d0c437956c7db43ad53ae225d4340e62f2c0ceb79ab6cc519d3c97570027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc20dc8e96606df154ecb5850afe3c1002d2869646e8a43a642e34d6f5ec8683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l47m9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-49r77\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.312645 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edc5ecab-9954-4a78-948a-e88c389044cd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92692024d66d9335da5886842043cbe9f547d164de3fdac6376b164ec8768113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d9bd6f4d07250b40c63c763479f4fda1c23d3c9f71eedb262fd9cb238354605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://199df3bd0e866ea57708c66a7cc92e346afc57c8e49449d4cee70825f658684a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8437bd6fcc9fe1422502b2feedbb03714ddbb2620d3b6431118c34511a950de6\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd301838aa5021be88873ee897a124950b0e5de604202ef2e9adc1d5ae40732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86141b1111aec255b1850d00a497e541509f05d03aba5ce3e1c26bf8d7181fdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86141b1111aec255b1850d00a497e541509f05d03aba5ce3e1c26bf8d7181fdc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f71357acb00a7b2364de56823e8f662069885964da1fc1aa8a3e91947869f334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71357acb00a7b2364de56823e8f662069885964da1fc1aa8a3e91947869f334\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://569b47e5967ce0ac32c9e788a893311624b65ce53898f938b656290e3521eac1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://569b47e5967ce0ac32c9e788a893311624
b65ce53898f938b656290e3521eac1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.330086 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"db9c6842-03cb-4a28-baff-77f27f537aa4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdc104f3ec13789d339165601ac84cf9973e811127f8fe05ddb9b8d5ab333a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34acb23b1eef53ae920322853c0503b9d26b0d30894e6c53ae52fb90e1671294\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1b13fbe360cd4e791e86acdb311b78d1ab1d3344fd287fd70cfa87cf656affa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:04Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d9ee16140b928b7e5c33811af2ca2b20b2fc98fdefc12bb1790fce59c407d9c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1daf
49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1daf49ff990637090db3ead68f5d0777b6502f79edfe524df743389ee220f8d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://193bd4ffc5c3324f05727b94ff852aaae455949fbb968b27c572f43fe0328854\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:20Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95db561c06acc2e7272a5a4a2c8a6a38315343379b80213c0e63f7a65a46f35d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bxrt7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kkfdg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.344209 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1494ebfa-d66c-4200-a336-2cedebcd5889\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7qsk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:17Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-r8zbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc 
kubenswrapper[4718]: I1210 14:33:15.362961 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52b297ad-8ff7-498e-8248-b64014de744f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfa44e8f79d752
5847ab10384aaf41105a8e6769b384679417cd0a7379748dee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-10T14:31:58Z\\\",\\\"message\\\":\\\"le observer\\\\nW1210 14:31:57.496102 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1210 14:31:57.496266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1210 14:31:57.497076 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1744864549/tls.crt::/tmp/serving-cert-1744864549/tls.key\\\\\\\"\\\\nI1210 14:31:57.965947 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1210 14:31:57.968367 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1210 14:31:57.968400 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1210 14:31:57.968417 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1210 14:31:57.968422 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1210 14:31:57.972813 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1210 14:31:57.972849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972856 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1210 14:31:57.972862 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1210 14:31:57.972867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1210 14:31:57.972870 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1210 
14:31:57.972874 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1210 14:31:57.973028 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1210 14:31:57.977590 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.367158 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.367190 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.367199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.367218 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.367230 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.378931 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.392698 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.406017 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://368fbae55ef9c78f2f88c5a07301f271e1ccffc66976a0c2dc9f80edce04613a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.426721 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://410548008fb250bc3e4c4409287aa72cf53991828f5c3f4ac74bcf8561cbf855\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:32:49Z\\\",\\\"message\\\":\\\"logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1210 14:32:48.189595 6552 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-r8zbt\\\\nI1210 14:32:48.189604 6552 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-r8zbt in node crc\\\\nI1210 14:32:48.189627 6552 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-r8zbt] creating logical port openshift-multus_network-metrics-daemon-r8zbt for pod on switch crc\\\\nF1210 14:32:48.188496 6552 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-10T14:33:14Z\\\",\\\"message\\\":\\\" network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:14Z is after 2025-08-24T17:21:41Z]\\\\nI1210 14:33:14.525162 6855 obj_retry.go:409] Going to retry *v1.Pod resource setup for 19 objects: [openshift-kube-apiserver/kube-apiserver-crc openshift-kube-controller-manager/kube-controller-manager-crc openshift-machine-config-operator/machine-config-daemon-8zmhn openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-dns/node-resolver-xvkg4 openshift-etcd/etcd-crc openshift-image-registry/node-ca-9fmrd openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-node-dtch2 openshift-network-nod\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-10T14:33:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni
-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:32:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhb7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dtch2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc 
kubenswrapper[4718]: I1210 14:33:15.440851 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b56868fde34c04a22b78be1e6cb02ff5b11f0dd9fa0232e5a4c38725a8bf193\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.453448 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9fmrd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"186fd52a-c63c-461f-a551-8b57ead36f59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c79dfef1bedaab660a35e29f8a48c6b153f7ca5028ef6a936a196699a30566e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r6f28\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9fmrd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:15Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.470477 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.470855 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.470968 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.471057 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.471140 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.575204 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.575571 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.575656 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.575761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.575887 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.679936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.680624 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.680664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.680694 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.680716 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.783881 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.783931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.783942 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.783961 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.783979 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.888229 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.888296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.888312 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.888340 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.888365 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.991597 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.991669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.991679 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.991696 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:15 crc kubenswrapper[4718]: I1210 14:33:15.991715 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:15Z","lastTransitionTime":"2025-12-10T14:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.019729 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.020319 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:16 crc kubenswrapper[4718]: E1210 14:33:16.020581 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:16 crc kubenswrapper[4718]: E1210 14:33:16.025589 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.035499 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8db53917-7cfb-496d-b8a0-5cc68f3be4e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:32:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac0946788c7cef36abe84639ca343f7ba379f19b94308cd1a08f4dcac4ad249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:32:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kfb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:32:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8zmhn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.052870 4718 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e88a635-7fd6-4100-9074-93065379b7af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-10T14:31:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe972be45d5fffe525720a01120068ca8d0d5ad43be84a4dfea4ca04946ff711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-10T14:31:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98178725ebbf1d6fe01df267ec22fdd0b8bb049fba5d6790ca3e045b71083a05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-10T14:31:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-10T14:31:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-10T14:31:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-10T14:33:16Z is after 2025-08-24T17:21:41Z" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.083585 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.08353047 podStartE2EDuration="1m14.08353047s" 
podCreationTimestamp="2025-12-10 14:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.083410507 +0000 UTC m=+101.032633924" watchObservedRunningTime="2025-12-10 14:33:16.08353047 +0000 UTC m=+101.032753887" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.095083 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.095153 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.095165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.095207 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.095221 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.142530 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/3.log" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.147134 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:33:16 crc kubenswrapper[4718]: E1210 14:33:16.147575 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.199345 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.199456 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.199506 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.199530 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.199546 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.201978 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.201948211 podStartE2EDuration="51.201948211s" podCreationTimestamp="2025-12-10 14:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.105779481 +0000 UTC m=+101.055002898" watchObservedRunningTime="2025-12-10 14:33:16.201948211 +0000 UTC m=+101.151171628" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.235548 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hv62w" podStartSLOduration=77.235514093 podStartE2EDuration="1m17.235514093s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.235170954 +0000 UTC m=+101.184394371" watchObservedRunningTime="2025-12-10 14:33:16.235514093 +0000 UTC m=+101.184737510" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.251257 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xvkg4" podStartSLOduration=77.251214596 podStartE2EDuration="1m17.251214596s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.250375165 +0000 UTC m=+101.199598572" watchObservedRunningTime="2025-12-10 14:33:16.251214596 +0000 UTC m=+101.200438013" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.297373 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.29732626 
podStartE2EDuration="2.29732626s" podCreationTimestamp="2025-12-10 14:33:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.297104355 +0000 UTC m=+101.246327772" watchObservedRunningTime="2025-12-10 14:33:16.29732626 +0000 UTC m=+101.246549677" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.297772 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-49r77" podStartSLOduration=75.297765112 podStartE2EDuration="1m15.297765112s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.265945404 +0000 UTC m=+101.215168821" watchObservedRunningTime="2025-12-10 14:33:16.297765112 +0000 UTC m=+101.246988529" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.302041 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.302116 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.302135 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.302165 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.302185 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.319553 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kkfdg" podStartSLOduration=77.31952185 podStartE2EDuration="1m17.31952185s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.319206252 +0000 UTC m=+101.268429679" watchObservedRunningTime="2025-12-10 14:33:16.31952185 +0000 UTC m=+101.268745267" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.356074 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.356045308 podStartE2EDuration="1m18.356045308s" podCreationTimestamp="2025-12-10 14:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.354992681 +0000 UTC m=+101.304216098" watchObservedRunningTime="2025-12-10 14:33:16.356045308 +0000 UTC m=+101.305268725" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.405305 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.405376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.405409 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.405454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.405467 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.505236 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9fmrd" podStartSLOduration=77.505196129 podStartE2EDuration="1m17.505196129s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.505014024 +0000 UTC m=+101.454237451" watchObservedRunningTime="2025-12-10 14:33:16.505196129 +0000 UTC m=+101.454419546" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.508292 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.508351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.508363 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.508454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.508479 4718 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.564848 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podStartSLOduration=77.56482515 podStartE2EDuration="1m17.56482515s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.564008589 +0000 UTC m=+101.513232006" watchObservedRunningTime="2025-12-10 14:33:16.56482515 +0000 UTC m=+101.514048567" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.578954 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=29.578925202 podStartE2EDuration="29.578925202s" podCreationTimestamp="2025-12-10 14:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:16.578525332 +0000 UTC m=+101.527748749" watchObservedRunningTime="2025-12-10 14:33:16.578925202 +0000 UTC m=+101.528148619" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.611833 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.611897 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.611912 4718 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.611931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.611943 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.715014 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.715065 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.715076 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.715094 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.715129 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.818668 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.818730 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.818743 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.818761 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.818773 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.921951 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.922012 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.922030 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.922055 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:16 crc kubenswrapper[4718]: I1210 14:33:16.922074 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:16Z","lastTransitionTime":"2025-12-10T14:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.020459 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.020556 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:17 crc kubenswrapper[4718]: E1210 14:33:17.020734 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:17 crc kubenswrapper[4718]: E1210 14:33:17.020957 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.025085 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.025148 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.025164 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.025187 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.025205 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.128297 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.128352 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.128365 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.128399 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.128417 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.231822 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.231895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.231911 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.231937 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.231950 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.335490 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.335534 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.335544 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.335560 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.335573 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.438821 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.438872 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.438886 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.438906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.438920 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.541743 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.541781 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.541790 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.541805 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.541816 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.644559 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.644625 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.644641 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.644664 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.644686 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.747509 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.747554 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.747563 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.747576 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.747587 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.851118 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.851442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.851574 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.851690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.851770 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.955147 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.955433 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.955520 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.955598 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:17 crc kubenswrapper[4718]: I1210 14:33:17.955707 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:17Z","lastTransitionTime":"2025-12-10T14:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.020039 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.020073 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:18 crc kubenswrapper[4718]: E1210 14:33:18.020616 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:18 crc kubenswrapper[4718]: E1210 14:33:18.020912 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.059653 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.059758 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.059772 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.060357 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.060422 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.164110 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.164192 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.164210 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.164234 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.164252 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.268048 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.268117 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.268128 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.268150 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.268164 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.371790 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.371860 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.371873 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.371899 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.371912 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.475615 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.475671 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.475681 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.475714 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.475735 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.579895 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.579955 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.579964 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.579981 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.579991 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.683122 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.683193 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.683222 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.683270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.683300 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.786838 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.786896 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.786906 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.786931 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.786946 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.889736 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.889817 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.889830 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.889848 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.889863 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.992287 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.992343 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.992355 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.992372 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:18 crc kubenswrapper[4718]: I1210 14:33:18.992405 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:18Z","lastTransitionTime":"2025-12-10T14:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.020018 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.020063 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:19 crc kubenswrapper[4718]: E1210 14:33:19.020315 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:19 crc kubenswrapper[4718]: E1210 14:33:19.020837 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.095621 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.095675 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.095691 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.095713 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.095729 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.199992 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.200050 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.200062 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.200087 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.200101 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.303333 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.303383 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.303411 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.303429 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.303441 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.406131 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.406189 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.406199 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.406219 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.406236 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.509930 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.509976 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.509988 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.510007 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.510022 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.612665 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.612712 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.612723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.612740 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.612752 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.717093 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.717160 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.717172 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.717197 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.717210 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.820533 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.820575 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.820585 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.820605 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.820616 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.923780 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.923829 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.923842 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.923859 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:19 crc kubenswrapper[4718]: I1210 14:33:19.923872 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:19Z","lastTransitionTime":"2025-12-10T14:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.019590 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.019622 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:20 crc kubenswrapper[4718]: E1210 14:33:20.019802 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:20 crc kubenswrapper[4718]: E1210 14:33:20.020111 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.026677 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.026719 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.026733 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.026749 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.026764 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.130173 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.130230 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.130243 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.130263 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.130276 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.233529 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.233573 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.233588 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.233608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.233620 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.336747 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.336824 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.336847 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.336878 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.336934 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.439739 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.439784 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.439801 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.439815 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.439825 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.544105 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.544162 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.544179 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.544202 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.544217 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.647370 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.647442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.647454 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.647472 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.647485 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.750950 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.751010 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.751020 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.751039 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.751050 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.854270 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.854349 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.854373 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.854442 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.854466 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.957874 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.957922 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.957936 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.957957 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:20 crc kubenswrapper[4718]: I1210 14:33:20.957971 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:20Z","lastTransitionTime":"2025-12-10T14:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.019994 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.020146 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:21 crc kubenswrapper[4718]: E1210 14:33:21.020174 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:21 crc kubenswrapper[4718]: E1210 14:33:21.020311 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.061613 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.061678 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.061690 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.061705 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.061715 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.165539 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.165596 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.165608 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.165626 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.165638 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.268669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.268721 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.268732 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.268756 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.268772 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.379217 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.379278 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.379298 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.379322 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.379338 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.483614 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.483660 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.483669 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.483686 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.483696 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.525710 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:21 crc kubenswrapper[4718]: E1210 14:33:21.525949 4718 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:33:21 crc kubenswrapper[4718]: E1210 14:33:21.526034 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs podName:1494ebfa-d66c-4200-a336-2cedebcd5889 nodeName:}" failed. No retries permitted until 2025-12-10 14:34:25.526013777 +0000 UTC m=+170.475237194 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs") pod "network-metrics-daemon-r8zbt" (UID: "1494ebfa-d66c-4200-a336-2cedebcd5889") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.586651 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.586697 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.586709 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.586723 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.586733 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.689233 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.689274 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.689284 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.689296 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.689305 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.791376 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.791435 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.791447 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.791463 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.791478 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.881359 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.881422 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.881434 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.881451 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.881463 4718 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-10T14:33:21Z","lastTransitionTime":"2025-12-10T14:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.933603 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9"] Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.934138 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.936553 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.939169 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.939352 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 14:33:21 crc kubenswrapper[4718]: I1210 14:33:21.939513 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.019657 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:22 crc kubenswrapper[4718]: E1210 14:33:22.019817 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.019657 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:22 crc kubenswrapper[4718]: E1210 14:33:22.020139 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.030260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.030298 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.030333 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.030369 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.030401 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131639 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131724 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131788 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131817 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131918 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.131945 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.133113 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.141456 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.150878 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c865edb9-6af1-45a1-9ebb-7dfbfa4d4026-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mqqg9\" (UID: \"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:22 crc kubenswrapper[4718]: I1210 14:33:22.255307 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" Dec 10 14:33:23 crc kubenswrapper[4718]: I1210 14:33:23.019819 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:23 crc kubenswrapper[4718]: I1210 14:33:23.020051 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:23 crc kubenswrapper[4718]: E1210 14:33:23.020627 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:23 crc kubenswrapper[4718]: E1210 14:33:23.020775 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:23 crc kubenswrapper[4718]: I1210 14:33:23.174580 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" event={"ID":"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026","Type":"ContainerStarted","Data":"3bb07be16d5650cae4799945995c287fec977ca4ed1fc43767db6e17a739c285"} Dec 10 14:33:23 crc kubenswrapper[4718]: I1210 14:33:23.174636 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" event={"ID":"c865edb9-6af1-45a1-9ebb-7dfbfa4d4026","Type":"ContainerStarted","Data":"a6c3f29b3c768d6acdef27ba713f96993cd42e263740969c2bff8f12d56f364f"} Dec 10 14:33:24 crc kubenswrapper[4718]: I1210 14:33:24.019875 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:24 crc kubenswrapper[4718]: I1210 14:33:24.019937 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:24 crc kubenswrapper[4718]: E1210 14:33:24.021120 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:24 crc kubenswrapper[4718]: E1210 14:33:24.021252 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:25 crc kubenswrapper[4718]: I1210 14:33:25.019321 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:25 crc kubenswrapper[4718]: I1210 14:33:25.019320 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:25 crc kubenswrapper[4718]: E1210 14:33:25.020003 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:25 crc kubenswrapper[4718]: E1210 14:33:25.020175 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:26 crc kubenswrapper[4718]: I1210 14:33:26.020379 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:26 crc kubenswrapper[4718]: I1210 14:33:26.020512 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:26 crc kubenswrapper[4718]: E1210 14:33:26.021514 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:26 crc kubenswrapper[4718]: E1210 14:33:26.021739 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:27 crc kubenswrapper[4718]: I1210 14:33:27.020006 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:27 crc kubenswrapper[4718]: E1210 14:33:27.020191 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:27 crc kubenswrapper[4718]: I1210 14:33:27.021229 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:27 crc kubenswrapper[4718]: E1210 14:33:27.021571 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:28 crc kubenswrapper[4718]: I1210 14:33:28.020262 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:28 crc kubenswrapper[4718]: I1210 14:33:28.020373 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:28 crc kubenswrapper[4718]: E1210 14:33:28.020456 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:28 crc kubenswrapper[4718]: E1210 14:33:28.020755 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:29 crc kubenswrapper[4718]: I1210 14:33:29.020105 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:29 crc kubenswrapper[4718]: E1210 14:33:29.020305 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:29 crc kubenswrapper[4718]: I1210 14:33:29.020578 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:29 crc kubenswrapper[4718]: E1210 14:33:29.020661 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:30 crc kubenswrapper[4718]: I1210 14:33:30.019819 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:30 crc kubenswrapper[4718]: I1210 14:33:30.019847 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:30 crc kubenswrapper[4718]: E1210 14:33:30.020008 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:30 crc kubenswrapper[4718]: E1210 14:33:30.020096 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:31 crc kubenswrapper[4718]: I1210 14:33:31.020284 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:31 crc kubenswrapper[4718]: I1210 14:33:31.020522 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:31 crc kubenswrapper[4718]: E1210 14:33:31.021025 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:31 crc kubenswrapper[4718]: E1210 14:33:31.021113 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:31 crc kubenswrapper[4718]: I1210 14:33:31.021434 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:33:31 crc kubenswrapper[4718]: E1210 14:33:31.021647 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:33:32 crc kubenswrapper[4718]: I1210 14:33:32.020063 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:32 crc kubenswrapper[4718]: I1210 14:33:32.020327 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:32 crc kubenswrapper[4718]: E1210 14:33:32.020450 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:32 crc kubenswrapper[4718]: E1210 14:33:32.020563 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:33 crc kubenswrapper[4718]: I1210 14:33:33.019781 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:33 crc kubenswrapper[4718]: I1210 14:33:33.019832 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:33 crc kubenswrapper[4718]: E1210 14:33:33.020766 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:33 crc kubenswrapper[4718]: E1210 14:33:33.021093 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:34 crc kubenswrapper[4718]: I1210 14:33:34.019627 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:34 crc kubenswrapper[4718]: I1210 14:33:34.019837 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:34 crc kubenswrapper[4718]: E1210 14:33:34.019929 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:34 crc kubenswrapper[4718]: E1210 14:33:34.020006 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:35 crc kubenswrapper[4718]: I1210 14:33:35.020032 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:35 crc kubenswrapper[4718]: I1210 14:33:35.020089 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:35 crc kubenswrapper[4718]: E1210 14:33:35.020282 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:35 crc kubenswrapper[4718]: E1210 14:33:35.020441 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:35 crc kubenswrapper[4718]: E1210 14:33:35.779873 4718 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 10 14:33:36 crc kubenswrapper[4718]: I1210 14:33:36.019846 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:36 crc kubenswrapper[4718]: I1210 14:33:36.020025 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:36 crc kubenswrapper[4718]: E1210 14:33:36.021739 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:36 crc kubenswrapper[4718]: E1210 14:33:36.021907 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:36 crc kubenswrapper[4718]: E1210 14:33:36.203594 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:37 crc kubenswrapper[4718]: I1210 14:33:37.019667 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:37 crc kubenswrapper[4718]: I1210 14:33:37.019709 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:37 crc kubenswrapper[4718]: E1210 14:33:37.019869 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:37 crc kubenswrapper[4718]: E1210 14:33:37.020035 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:38 crc kubenswrapper[4718]: I1210 14:33:38.019767 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:38 crc kubenswrapper[4718]: E1210 14:33:38.019995 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:38 crc kubenswrapper[4718]: I1210 14:33:38.020228 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:38 crc kubenswrapper[4718]: E1210 14:33:38.020568 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:39 crc kubenswrapper[4718]: I1210 14:33:39.019640 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:39 crc kubenswrapper[4718]: I1210 14:33:39.019806 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:39 crc kubenswrapper[4718]: E1210 14:33:39.019937 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:39 crc kubenswrapper[4718]: E1210 14:33:39.020029 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:40 crc kubenswrapper[4718]: I1210 14:33:40.019775 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:40 crc kubenswrapper[4718]: I1210 14:33:40.019861 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:40 crc kubenswrapper[4718]: E1210 14:33:40.019952 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:40 crc kubenswrapper[4718]: E1210 14:33:40.020022 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.019445 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.019456 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:41 crc kubenswrapper[4718]: E1210 14:33:41.019888 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:41 crc kubenswrapper[4718]: E1210 14:33:41.019942 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:41 crc kubenswrapper[4718]: E1210 14:33:41.205154 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.240358 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/1.log" Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.241074 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/0.log" Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.241145 4718 generic.go:334] "Generic (PLEG): container finished" podID="9db3984f-4589-462f-94d7-89a885be63d5" containerID="0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93" exitCode=1 Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.241190 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerDied","Data":"0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93"} Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.241243 4718 scope.go:117] "RemoveContainer" containerID="2c18891e2a71e33784526db2ff2226ff6aa8a9df462f5cb11ff4f5a446509f54" Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.241837 4718 scope.go:117] "RemoveContainer" containerID="0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93" Dec 10 14:33:41 crc kubenswrapper[4718]: E1210 14:33:41.244537 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hv62w_openshift-multus(9db3984f-4589-462f-94d7-89a885be63d5)\"" pod="openshift-multus/multus-hv62w" podUID="9db3984f-4589-462f-94d7-89a885be63d5" Dec 10 14:33:41 crc kubenswrapper[4718]: I1210 14:33:41.269674 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mqqg9" podStartSLOduration=101.269644168 
podStartE2EDuration="1m41.269644168s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:23.19164814 +0000 UTC m=+108.140871557" watchObservedRunningTime="2025-12-10 14:33:41.269644168 +0000 UTC m=+126.218867605" Dec 10 14:33:42 crc kubenswrapper[4718]: I1210 14:33:42.019607 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:42 crc kubenswrapper[4718]: I1210 14:33:42.019648 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:42 crc kubenswrapper[4718]: E1210 14:33:42.020006 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:42 crc kubenswrapper[4718]: E1210 14:33:42.020174 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:42 crc kubenswrapper[4718]: I1210 14:33:42.020350 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:33:42 crc kubenswrapper[4718]: E1210 14:33:42.020536 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dtch2_openshift-ovn-kubernetes(612af6cb-db4d-4874-a9ea-8b3c7eb8e30c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" Dec 10 14:33:42 crc kubenswrapper[4718]: I1210 14:33:42.247175 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/1.log" Dec 10 14:33:43 crc kubenswrapper[4718]: I1210 14:33:43.019516 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:43 crc kubenswrapper[4718]: I1210 14:33:43.019547 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:43 crc kubenswrapper[4718]: E1210 14:33:43.019782 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:43 crc kubenswrapper[4718]: E1210 14:33:43.019950 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:44 crc kubenswrapper[4718]: I1210 14:33:44.019801 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:44 crc kubenswrapper[4718]: I1210 14:33:44.019959 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:44 crc kubenswrapper[4718]: E1210 14:33:44.020143 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:44 crc kubenswrapper[4718]: E1210 14:33:44.020708 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:45 crc kubenswrapper[4718]: I1210 14:33:45.019954 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:45 crc kubenswrapper[4718]: I1210 14:33:45.019964 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:45 crc kubenswrapper[4718]: E1210 14:33:45.020180 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:45 crc kubenswrapper[4718]: E1210 14:33:45.020305 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:46 crc kubenswrapper[4718]: I1210 14:33:46.020237 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:46 crc kubenswrapper[4718]: I1210 14:33:46.020246 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:46 crc kubenswrapper[4718]: E1210 14:33:46.022144 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:46 crc kubenswrapper[4718]: E1210 14:33:46.023516 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:46 crc kubenswrapper[4718]: E1210 14:33:46.206993 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:47 crc kubenswrapper[4718]: I1210 14:33:47.020127 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:47 crc kubenswrapper[4718]: I1210 14:33:47.020181 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:47 crc kubenswrapper[4718]: E1210 14:33:47.020380 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:47 crc kubenswrapper[4718]: E1210 14:33:47.020754 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:48 crc kubenswrapper[4718]: I1210 14:33:48.020103 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:48 crc kubenswrapper[4718]: E1210 14:33:48.020312 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:48 crc kubenswrapper[4718]: I1210 14:33:48.020633 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:48 crc kubenswrapper[4718]: E1210 14:33:48.020741 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:49 crc kubenswrapper[4718]: I1210 14:33:49.019968 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:49 crc kubenswrapper[4718]: I1210 14:33:49.020164 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:49 crc kubenswrapper[4718]: E1210 14:33:49.020242 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:49 crc kubenswrapper[4718]: E1210 14:33:49.020366 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:50 crc kubenswrapper[4718]: I1210 14:33:50.020421 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:50 crc kubenswrapper[4718]: E1210 14:33:50.020685 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:50 crc kubenswrapper[4718]: I1210 14:33:50.020849 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:50 crc kubenswrapper[4718]: E1210 14:33:50.021082 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:51 crc kubenswrapper[4718]: I1210 14:33:51.019588 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:51 crc kubenswrapper[4718]: I1210 14:33:51.019675 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:51 crc kubenswrapper[4718]: E1210 14:33:51.019814 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:51 crc kubenswrapper[4718]: E1210 14:33:51.019911 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:51 crc kubenswrapper[4718]: E1210 14:33:51.208909 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 10 14:33:52 crc kubenswrapper[4718]: I1210 14:33:52.019812 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:52 crc kubenswrapper[4718]: I1210 14:33:52.019912 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:52 crc kubenswrapper[4718]: E1210 14:33:52.019981 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:52 crc kubenswrapper[4718]: E1210 14:33:52.020126 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:53 crc kubenswrapper[4718]: I1210 14:33:53.019705 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:53 crc kubenswrapper[4718]: I1210 14:33:53.019780 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:53 crc kubenswrapper[4718]: E1210 14:33:53.019890 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:53 crc kubenswrapper[4718]: E1210 14:33:53.020017 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:54 crc kubenswrapper[4718]: I1210 14:33:54.019737 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:54 crc kubenswrapper[4718]: I1210 14:33:54.020565 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:54 crc kubenswrapper[4718]: I1210 14:33:54.020801 4718 scope.go:117] "RemoveContainer" containerID="0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93" Dec 10 14:33:54 crc kubenswrapper[4718]: E1210 14:33:54.021360 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:54 crc kubenswrapper[4718]: E1210 14:33:54.021578 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:54 crc kubenswrapper[4718]: I1210 14:33:54.292154 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/1.log" Dec 10 14:33:54 crc kubenswrapper[4718]: I1210 14:33:54.292258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerStarted","Data":"b0c86f72e14e1a163070f2925d7a72fc1412fc781bcff0326c25e2db755af5ba"} Dec 10 14:33:55 crc kubenswrapper[4718]: I1210 14:33:55.019817 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:55 crc kubenswrapper[4718]: I1210 14:33:55.019890 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:55 crc kubenswrapper[4718]: E1210 14:33:55.020336 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:55 crc kubenswrapper[4718]: E1210 14:33:55.020383 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.019560 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.019677 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:56 crc kubenswrapper[4718]: E1210 14:33:56.020936 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:56 crc kubenswrapper[4718]: E1210 14:33:56.021075 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.021874 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:33:56 crc kubenswrapper[4718]: E1210 14:33:56.210795 4718 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.302019 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/3.log" Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.304996 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerStarted","Data":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.305842 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.331824 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podStartSLOduration=116.331796484 podStartE2EDuration="1m56.331796484s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:33:56.331778413 +0000 UTC m=+141.281001840" watchObservedRunningTime="2025-12-10 14:33:56.331796484 +0000 UTC m=+141.281019911" Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.897896 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r8zbt"] Dec 10 14:33:56 crc kubenswrapper[4718]: I1210 14:33:56.898032 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:56 crc kubenswrapper[4718]: E1210 14:33:56.898123 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:33:57 crc kubenswrapper[4718]: I1210 14:33:57.020432 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:57 crc kubenswrapper[4718]: E1210 14:33:57.020615 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:58 crc kubenswrapper[4718]: I1210 14:33:58.020291 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:33:58 crc kubenswrapper[4718]: I1210 14:33:58.020370 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:33:58 crc kubenswrapper[4718]: E1210 14:33:58.020523 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:33:58 crc kubenswrapper[4718]: E1210 14:33:58.020825 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:33:59 crc kubenswrapper[4718]: I1210 14:33:59.019889 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:33:59 crc kubenswrapper[4718]: I1210 14:33:59.019914 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:33:59 crc kubenswrapper[4718]: E1210 14:33:59.020459 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:33:59 crc kubenswrapper[4718]: E1210 14:33:59.020558 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:34:00 crc kubenswrapper[4718]: I1210 14:34:00.020303 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:00 crc kubenswrapper[4718]: I1210 14:34:00.020443 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:00 crc kubenswrapper[4718]: E1210 14:34:00.020581 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 10 14:34:00 crc kubenswrapper[4718]: E1210 14:34:00.020674 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 10 14:34:01 crc kubenswrapper[4718]: I1210 14:34:01.020014 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:01 crc kubenswrapper[4718]: I1210 14:34:01.020048 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:34:01 crc kubenswrapper[4718]: E1210 14:34:01.020192 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 10 14:34:01 crc kubenswrapper[4718]: E1210 14:34:01.020358 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-r8zbt" podUID="1494ebfa-d66c-4200-a336-2cedebcd5889" Dec 10 14:34:01 crc kubenswrapper[4718]: I1210 14:34:01.082284 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.020003 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.019989 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.024440 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.024459 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.024459 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.024934 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.511351 4718 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.561645 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p7wpl"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.561998 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5qn5w"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.562322 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.562582 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.563488 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.564042 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.566252 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-t2gmf"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.566595 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tk99n"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.566733 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.566790 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.566939 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.567058 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.567110 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.567419 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.573475 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.573565 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.573605 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.574117 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.574600 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.575412 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.575620 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.575730 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.576018 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.576173 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.575789 4718 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.576639 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.576724 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.576663 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.577329 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.577944 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mx69k"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.578649 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579461 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579619 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579732 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579739 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579834 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579898 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.579966 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580002 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580091 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580161 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580244 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580309 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580331 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580248 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580527 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.580627 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.584117 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.584816 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.584875 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.586861 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.587565 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.593631 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.594053 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.594615 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.594739 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.594858 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.595020 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.595211 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.595442 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.596120 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.600147 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 14:34:02 crc 
kubenswrapper[4718]: I1210 14:34:02.601062 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.643323 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.644131 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-prs2h"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.644683 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.645695 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.645698 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qq8sm"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.646814 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.653534 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vws9k"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654222 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654268 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-service-ca\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654290 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab797808-0785-40c6-8399-1a3bfef7b7f1-serving-cert\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654311 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-etcd-client\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 
14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654331 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdfd948f-0ed8-45ed-90ab-3126ba209608-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q5c57\" (UID: \"fdfd948f-0ed8-45ed-90ab-3126ba209608\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654373 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654406 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6484\" (UniqueName: \"kubernetes.io/projected/f18678b9-691a-4582-b327-b5bc9f1983d8-kube-api-access-r6484\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.654524 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.656688 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.658324 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.661230 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.661847 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.661898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-config\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.661922 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ab797808-0785-40c6-8399-1a3bfef7b7f1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc 
kubenswrapper[4718]: I1210 14:34:02.661949 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6krnc\" (UniqueName: \"kubernetes.io/projected/ab797808-0785-40c6-8399-1a3bfef7b7f1-kube-api-access-6krnc\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.661973 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-console-config\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662001 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-etcd-serving-ca\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662023 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-serving-cert\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662044 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662066 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8908165f-bd25-44b8-916e-5b910ce5d74c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662091 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f18678b9-691a-4582-b327-b5bc9f1983d8-node-pullsecrets\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662110 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-audit\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662129 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-encryption-config\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662151 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7lrd\" (UniqueName: \"kubernetes.io/projected/8908165f-bd25-44b8-916e-5b910ce5d74c-kube-api-access-f7lrd\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662173 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662193 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-serving-cert\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662208 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-oauth-config\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662201 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662231 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da647d92-a61b-4c5d-b97c-730df809d8fb-serving-cert\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62d109b9-dd6e-47c6-b384-336b480f804d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662268 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8j4\" (UniqueName: \"kubernetes.io/projected/62d109b9-dd6e-47c6-b384-336b480f804d-kube-api-access-zx8j4\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662284 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662307 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-config\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662351 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kclr9\" (UniqueName: \"kubernetes.io/projected/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-kube-api-access-kclr9\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662374 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx7xw\" (UniqueName: \"kubernetes.io/projected/da647d92-a61b-4c5d-b97c-730df809d8fb-kube-api-access-bx7xw\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662410 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-policies\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7ln\" (UniqueName: \"kubernetes.io/projected/17fe734a-f022-4fd4-8276-661e662e2c6b-kube-api-access-bl7ln\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662447 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v27dx\" (UniqueName: \"kubernetes.io/projected/fdfd948f-0ed8-45ed-90ab-3126ba209608-kube-api-access-v27dx\") pod \"cluster-samples-operator-665b6dd947-q5c57\" (UID: \"fdfd948f-0ed8-45ed-90ab-3126ba209608\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662510 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18678b9-691a-4582-b327-b5bc9f1983d8-audit-dir\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662538 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662555 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8908165f-bd25-44b8-916e-5b910ce5d74c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662647 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662683 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662712 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662777 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8908165f-bd25-44b8-916e-5b910ce5d74c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662832 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-trusted-ca-bundle\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662881 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-dir\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662903 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662944 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-image-import-ca\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662964 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-oauth-serving-cert\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.662994 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.663044 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d109b9-dd6e-47c6-b384-336b480f804d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.664775 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.668029 4718 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.668312 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.668484 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.668639 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.668927 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.669136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.670980 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.671737 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.671808 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.671937 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672055 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672108 4718 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672141 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672240 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672274 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672443 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672604 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672728 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672862 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.671755 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.673232 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.672995 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.673537 4718 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.673402 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.673480 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.673546 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.674356 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.674537 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.675156 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.675638 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.676113 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.697881 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hbnxp"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.698794 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.700881 
4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-brrd5"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.704473 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.706313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.706575 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.706759 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.707096 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.707774 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.715452 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.716285 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.716561 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.716892 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.718966 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.719290 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.719741 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.719866 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.719989 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.720238 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.720523 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.720544 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.720644 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.720807 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.720917 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721039 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721087 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721158 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721277 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721432 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721536 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721567 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.721764 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.722202 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.722507 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.722591 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.722869 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.723408 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7p66s"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.724284 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.724743 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.724759 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.725266 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n2lcz"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.725800 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.726145 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.726865 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.730474 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.731537 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rn9bk"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.732061 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.733859 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-7fd22"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.734739 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.735146 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.735244 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.735479 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.740533 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.740899 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.741075 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.741327 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.741847 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.745185 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.746312 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.747732 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.751373 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.751830 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.752149 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-twtlk"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.753213 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.753272 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.755273 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.755743 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v2z2q"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.756122 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.756676 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.757223 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.764563 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.765418 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.767373 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da647d92-a61b-4c5d-b97c-730df809d8fb-serving-cert\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.767416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62d109b9-dd6e-47c6-b384-336b480f804d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.767441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.767458 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8j4\" (UniqueName: \"kubernetes.io/projected/62d109b9-dd6e-47c6-b384-336b480f804d-kube-api-access-zx8j4\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768091 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768838 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-config\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768869 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kclr9\" (UniqueName: \"kubernetes.io/projected/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-kube-api-access-kclr9\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768916 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4e99f92-30bf-44e1-a7b3-b5d481af2100-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768937 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx7xw\" (UniqueName: \"kubernetes.io/projected/da647d92-a61b-4c5d-b97c-730df809d8fb-kube-api-access-bx7xw\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768961 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-policies\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768980 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl7ln\" (UniqueName: \"kubernetes.io/projected/17fe734a-f022-4fd4-8276-661e662e2c6b-kube-api-access-bl7ln\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.768998 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v27dx\" (UniqueName: 
\"kubernetes.io/projected/fdfd948f-0ed8-45ed-90ab-3126ba209608-kube-api-access-v27dx\") pod \"cluster-samples-operator-665b6dd947-q5c57\" (UID: \"fdfd948f-0ed8-45ed-90ab-3126ba209608\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770213 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-config\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770698 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfx8\" (UniqueName: \"kubernetes.io/projected/f4e99f92-30bf-44e1-a7b3-b5d481af2100-kube-api-access-5hfx8\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18678b9-691a-4582-b327-b5bc9f1983d8-audit-dir\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770782 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770801 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8908165f-bd25-44b8-916e-5b910ce5d74c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770823 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770844 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770898 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770921 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770942 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8908165f-bd25-44b8-916e-5b910ce5d74c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-trusted-ca-bundle\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.770989 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-dir\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771009 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: 
I1210 14:34:02.771038 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-image-import-ca\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-oauth-serving-cert\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771109 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771123 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-policies\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d109b9-dd6e-47c6-b384-336b480f804d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771219 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-service-ca\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771240 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab797808-0785-40c6-8399-1a3bfef7b7f1-serving-cert\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771262 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-etcd-client\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdfd948f-0ed8-45ed-90ab-3126ba209608-samples-operator-tls\") 
pod \"cluster-samples-operator-665b6dd947-q5c57\" (UID: \"fdfd948f-0ed8-45ed-90ab-3126ba209608\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771314 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771343 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6484\" (UniqueName: \"kubernetes.io/projected/f18678b9-691a-4582-b327-b5bc9f1983d8-kube-api-access-r6484\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771382 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-config\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771445 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ab797808-0785-40c6-8399-1a3bfef7b7f1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771466 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6krnc\" (UniqueName: \"kubernetes.io/projected/ab797808-0785-40c6-8399-1a3bfef7b7f1-kube-api-access-6krnc\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771484 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-console-config\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-serving-cert\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.774995 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-console-config\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775064 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18678b9-691a-4582-b327-b5bc9f1983d8-audit-dir\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775139 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8908165f-bd25-44b8-916e-5b910ce5d74c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-etcd-serving-ca\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f18678b9-691a-4582-b327-b5bc9f1983d8-node-pullsecrets\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 
crc kubenswrapper[4718]: I1210 14:34:02.775200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-audit\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775225 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-encryption-config\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4e99f92-30bf-44e1-a7b3-b5d481af2100-proxy-tls\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775278 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7lrd\" (UniqueName: \"kubernetes.io/projected/8908165f-bd25-44b8-916e-5b910ce5d74c-kube-api-access-f7lrd\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775306 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: 
\"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775324 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-serving-cert\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.775342 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-oauth-config\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.776442 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f18678b9-691a-4582-b327-b5bc9f1983d8-node-pullsecrets\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.776923 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ab797808-0785-40c6-8399-1a3bfef7b7f1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.777018 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-config\") pod \"apiserver-76f77b778f-5qn5w\" 
(UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.778350 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.778829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62d109b9-dd6e-47c6-b384-336b480f804d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.779076 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.771256 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25"] Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.779815 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.779841 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da647d92-a61b-4c5d-b97c-730df809d8fb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.780425 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.780580 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.807360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.807613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.808055 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-oauth-config\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.808531 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-image-import-ca\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.808651 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-service-ca\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.809081 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-etcd-client\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.809472 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.810205 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-oauth-serving-cert\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.810519 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-serving-cert\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.811486 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-trusted-ca-bundle\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.811827 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.811879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-etcd-serving-ca\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.812002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-audit\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.812189 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.812728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.812777 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18678b9-691a-4582-b327-b5bc9f1983d8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.813448 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62d109b9-dd6e-47c6-b384-336b480f804d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.813731 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8908165f-bd25-44b8-916e-5b910ce5d74c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.813795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da647d92-a61b-4c5d-b97c-730df809d8fb-serving-cert\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.814299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab797808-0785-40c6-8399-1a3bfef7b7f1-serving-cert\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.814715 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdfd948f-0ed8-45ed-90ab-3126ba209608-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-q5c57\" (UID: \"fdfd948f-0ed8-45ed-90ab-3126ba209608\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.814940 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.815264 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.815261 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.815784 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-dir\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.817466 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.818106 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f18678b9-691a-4582-b327-b5bc9f1983d8-encryption-config\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.818607 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.819705 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8908165f-bd25-44b8-916e-5b910ce5d74c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.820203 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.820878 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.823526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-serving-cert\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.825243 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5qn5w"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.825725 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.827062 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.829720 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.831751 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.837514 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-prs2h"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.840036 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xrpzk"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.841451 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hbnxp"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.841816 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vws9k"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.842482 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.843801 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qq8sm"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.845397 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.845610 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p7wpl"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.847077 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t2gmf"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.848546 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.850020 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.852662 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mx69k"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.856247 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.857721 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.859100 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tk99n"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.860663 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rn9bk"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.862127 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.863292 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gvvtl"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.865035 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gvvtl"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.865429 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-l47zr"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.865502 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.866096 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l47zr"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.868242 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.869423 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.870451 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.871514 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.872522 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.873547 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.874555 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.875529 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n2lcz"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.877723 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.878782 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.879836 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-twtlk"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.880459 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4e99f92-30bf-44e1-a7b3-b5d481af2100-proxy-tls\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.880524 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4e99f92-30bf-44e1-a7b3-b5d481af2100-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.880601 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfx8\" (UniqueName: \"kubernetes.io/projected/f4e99f92-30bf-44e1-a7b3-b5d481af2100-kube-api-access-5hfx8\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.880893 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.881682 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4e99f92-30bf-44e1-a7b3-b5d481af2100-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.881890 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7p66s"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.882911 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gvvtl"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.883918 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-brrd5"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.885646 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.885949 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.886136 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l47zr"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.887237 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.888328 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xrpzk"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.889340 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v2z2q"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.890331 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fqhgq"]
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.891135 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fqhgq"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.905743 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.924435 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.944946 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.965112 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 10 14:34:02 crc kubenswrapper[4718]: I1210 14:34:02.984721 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.004615 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.020252 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.020270 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.024720 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.044772 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.065295 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.085035 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.105990 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.126401 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.144929 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.165157 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.186823 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.205187 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.225077 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.245051 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.265308 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.284703 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.305754 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.325236 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.353037 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.365760 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.384696 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.406060 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.425127 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.446216 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.466021 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.486009 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.504729 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.526297 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.546609 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.565570 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.585674 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.606079 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.625665 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.645812 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.685300 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.694552 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4e99f92-30bf-44e1-a7b3-b5d481af2100-proxy-tls\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.705293 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.725293 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.745486 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.763640 4718 request.go:700] Waited for 1.009888059s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.765909 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.806772 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.825775 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.846078 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.866105 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.885778 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.906224 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.925814 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.946004 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.970317 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 10 14:34:03 crc kubenswrapper[4718]: I1210 14:34:03.996655 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.005979 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.026264 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.045338 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.066230 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.085445 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.106212 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.126057 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.145647 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.165366 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.185243 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.209784 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.242168 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8j4\" (UniqueName: \"kubernetes.io/projected/62d109b9-dd6e-47c6-b384-336b480f804d-kube-api-access-zx8j4\") pod \"openshift-controller-manager-operator-756b6f6bc6-l6wsm\" (UID: \"62d109b9-dd6e-47c6-b384-336b480f804d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.280512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kclr9\" (UniqueName: \"kubernetes.io/projected/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-kube-api-access-kclr9\") pod \"oauth-openshift-558db77b4-tk99n\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " pod="openshift-authentication/oauth-openshift-558db77b4-tk99n"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.295887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx7xw\" (UniqueName: \"kubernetes.io/projected/da647d92-a61b-4c5d-b97c-730df809d8fb-kube-api-access-bx7xw\") pod \"authentication-operator-69f744f599-p7wpl\" (UID: \"da647d92-a61b-4c5d-b97c-730df809d8fb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.312360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8908165f-bd25-44b8-916e-5b910ce5d74c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm"
Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.322856 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl7ln\" (UniqueName: \"kubernetes.io/projected/17fe734a-f022-4fd4-8276-661e662e2c6b-kube-api-access-bl7ln\") pod \"console-f9d7485db-t2gmf\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " pod="openshift-console/console-f9d7485db-t2gmf"
Dec 10
14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.342379 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v27dx\" (UniqueName: \"kubernetes.io/projected/fdfd948f-0ed8-45ed-90ab-3126ba209608-kube-api-access-v27dx\") pod \"cluster-samples-operator-665b6dd947-q5c57\" (UID: \"fdfd948f-0ed8-45ed-90ab-3126ba209608\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.363548 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6krnc\" (UniqueName: \"kubernetes.io/projected/ab797808-0785-40c6-8399-1a3bfef7b7f1-kube-api-access-6krnc\") pod \"openshift-config-operator-7777fb866f-mx69k\" (UID: \"ab797808-0785-40c6-8399-1a3bfef7b7f1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.391571 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6484\" (UniqueName: \"kubernetes.io/projected/f18678b9-691a-4582-b327-b5bc9f1983d8-kube-api-access-r6484\") pod \"apiserver-76f77b778f-5qn5w\" (UID: \"f18678b9-691a-4582-b327-b5bc9f1983d8\") " pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.396423 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.404177 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.404924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7lrd\" (UniqueName: \"kubernetes.io/projected/8908165f-bd25-44b8-916e-5b910ce5d74c-kube-api-access-f7lrd\") pod \"cluster-image-registry-operator-dc59b4c8b-rkbqm\" (UID: \"8908165f-bd25-44b8-916e-5b910ce5d74c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.419710 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.424809 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.445216 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.466030 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.469465 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.477555 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.485549 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.506082 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.516631 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.525490 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.540852 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.545571 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.549517 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.557911 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.568072 4718 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.586240 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.609549 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.618427 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p7wpl"] Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.626227 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.636204 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm"] Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.645708 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.671345 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.678999 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5qn5w"] Dec 10 14:34:04 crc kubenswrapper[4718]: W1210 14:34:04.680338 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62d109b9_dd6e_47c6_b384_336b480f804d.slice/crio-f28a930ed5cc2c2af0cb340d605d2c470ebbfeae9bec98c86c5bb43776ca38db WatchSource:0}: Error finding container f28a930ed5cc2c2af0cb340d605d2c470ebbfeae9bec98c86c5bb43776ca38db: Status 404 returned error can't find the container with id f28a930ed5cc2c2af0cb340d605d2c470ebbfeae9bec98c86c5bb43776ca38db Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.686707 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.704983 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.730535 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.764102 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfx8\" (UniqueName: \"kubernetes.io/projected/f4e99f92-30bf-44e1-a7b3-b5d481af2100-kube-api-access-5hfx8\") pod \"machine-config-controller-84d6567774-2x76l\" (UID: \"f4e99f92-30bf-44e1-a7b3-b5d481af2100\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.765417 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.783620 4718 request.go:700] Waited for 1.892199921s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Dec 10 14:34:04 crc 
kubenswrapper[4718]: I1210 14:34:04.785854 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.806015 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.807433 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm"] Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.809759 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.824918 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.846903 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tk99n"] Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.847983 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.868487 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57"] Dec 10 14:34:04 crc kubenswrapper[4718]: I1210 14:34:04.968508 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t2gmf"] Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.077218 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-mx69k"] Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.296208 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-tls\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.296367 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-trusted-ca\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.296473 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.296534 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2638a0da-6209-4691-a4d4-6aa91a4ca547-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.296698 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-certificates\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: 
\"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.297823 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:05.797796048 +0000 UTC m=+150.747019465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.298847 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2638a0da-6209-4691-a4d4-6aa91a4ca547-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.301798 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-bound-sa-token\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.372888 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" 
event={"ID":"8908165f-bd25-44b8-916e-5b910ce5d74c","Type":"ContainerStarted","Data":"138c93a1d25f053c1f3ffc01bdb42fede7483d5dc05c16a2de7e6a185e4e7aaa"} Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.377572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" event={"ID":"da647d92-a61b-4c5d-b97c-730df809d8fb","Type":"ContainerStarted","Data":"f6c7a5ee29cafa2aee56e9f307ced5b7e73f9e87216e757d249e0ca4d4bbf173"} Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.378592 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" event={"ID":"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33","Type":"ContainerStarted","Data":"1f06d59f36d52e5f658bf3dca6193202b48ddbf4d6f959bb6067ec2aa5367e47"} Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.379264 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t2gmf" event={"ID":"17fe734a-f022-4fd4-8276-661e662e2c6b","Type":"ContainerStarted","Data":"7f568676b729681a6b13944ff1b882ef67b601ac056db46dc1a853e7fb57cb21"} Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.380047 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" event={"ID":"f18678b9-691a-4582-b327-b5bc9f1983d8","Type":"ContainerStarted","Data":"8c0175e52c7b148bde06e95ff044d71dda29e1ff472090798f3fd0756807f0c9"} Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.380585 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" event={"ID":"62d109b9-dd6e-47c6-b384-336b480f804d","Type":"ContainerStarted","Data":"f28a930ed5cc2c2af0cb340d605d2c470ebbfeae9bec98c86c5bb43776ca38db"} Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.381149 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" event={"ID":"ab797808-0785-40c6-8399-1a3bfef7b7f1","Type":"ContainerStarted","Data":"857d4c48c6db914d366e1a59975d9c20954f631ad87a936c3ac9c1fcfc733386"} Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.405360 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:05.905333155 +0000 UTC m=+150.854556582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.405302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406019 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406068 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cw6v\" (UniqueName: \"kubernetes.io/projected/8b5042f5-52a1-42da-9a21-72d7b2a75c75-kube-api-access-2cw6v\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff4ff8a-f156-4da4-ba81-477078a8345d-config-volume\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406121 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406146 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7p66s\" (UID: \"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406169 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f9989fa-f608-438c-9bfd-f06ef32470c7-trusted-ca\") pod 
\"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406219 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ec81ced-fd94-49a3-aa46-d7febbbfb825-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406244 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28dfccad-4a7e-463f-a8b3-77a451a865ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406267 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406294 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f201fb44-80ca-4142-aaf9-4cfbd8215309-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 
14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406344 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-config\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-default-certificate\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406457 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bc2976e-bdb0-4450-9c5e-73052e705f7a-webhook-cert\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406523 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrx6\" (UniqueName: \"kubernetes.io/projected/522ecb50-d5ad-4834-9a63-a4c843ca824d-kube-api-access-5vrx6\") pod \"ingress-canary-l47zr\" (UID: \"522ecb50-d5ad-4834-9a63-a4c843ca824d\") " pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406545 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjdq\" (UniqueName: \"kubernetes.io/projected/bc13833f-8cfa-417a-a16a-420e5b00843b-kube-api-access-6wjdq\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-config\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406601 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc 
kubenswrapper[4718]: I1210 14:34:05.406640 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2638a0da-6209-4691-a4d4-6aa91a4ca547-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406664 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6efe70-637d-42bf-a57e-92ed4503a23a-auth-proxy-config\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-socket-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406713 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9989fa-f608-438c-9bfd-f06ef32470c7-config\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406737 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78119cb9-deb1-4a37-bc05-7b911969047f-audit-dir\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: 
\"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406759 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d025d0-f89c-4e18-aae4-f923d5797693-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406784 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bc2976e-bdb0-4450-9c5e-73052e705f7a-apiservice-cert\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-certificates\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406837 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5042f5-52a1-42da-9a21-72d7b2a75c75-images\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9989fa-f608-438c-9bfd-f06ef32470c7-serving-cert\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406905 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc6efe70-637d-42bf-a57e-92ed4503a23a-config\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406937 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28dfccad-4a7e-463f-a8b3-77a451a865ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.406962 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/079f1ed7-7f70-4b4c-9afd-cf0286348562-srv-cert\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.407030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/28dfccad-4a7e-463f-a8b3-77a451a865ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 
14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.407826 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2638a0da-6209-4691-a4d4-6aa91a4ca547-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.407989 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/079f1ed7-7f70-4b4c-9afd-cf0286348562-profile-collector-cert\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408066 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/deb5d218-90f7-429b-b09d-137dd3422ea0-signing-key\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408084 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwk8h\" (UniqueName: \"kubernetes.io/projected/850edb82-dd95-474f-a41c-3aa48faa4b87-kube-api-access-bwk8h\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408147 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-audit-policies\") pod 
\"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-certificates\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408173 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-service-ca\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408221 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms24s\" (UniqueName: \"kubernetes.io/projected/3bc2976e-bdb0-4450-9c5e-73052e705f7a-kube-api-access-ms24s\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5042f5-52a1-42da-9a21-72d7b2a75c75-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408272 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjsx\" (UniqueName: \"kubernetes.io/projected/9c06237f-9a58-430d-93c8-69297f9e1363-kube-api-access-5wjsx\") pod \"migrator-59844c95c7-kkv25\" (UID: \"9c06237f-9a58-430d-93c8-69297f9e1363\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/522ecb50-d5ad-4834-9a63-a4c843ca824d-cert\") pod \"ingress-canary-l47zr\" (UID: \"522ecb50-d5ad-4834-9a63-a4c843ca824d\") " pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-serving-cert\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408343 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc13833f-8cfa-417a-a16a-420e5b00843b-metrics-tls\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408366 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhv9\" (UniqueName: \"kubernetes.io/projected/4f9989fa-f608-438c-9bfd-f06ef32470c7-kube-api-access-pdhv9\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " 
pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408443 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lg7h\" (UniqueName: \"kubernetes.io/projected/78119cb9-deb1-4a37-bc05-7b911969047f-kube-api-access-7lg7h\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408466 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-client\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408524 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-etcd-client\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2e7d85a9-ce97-4481-934c-3e6f3c720007-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408774 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dnw\" (UniqueName: \"kubernetes.io/projected/079f1ed7-7f70-4b4c-9afd-cf0286348562-kube-api-access-k5dnw\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408808 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe1f76c3-fb22-4c92-bc66-7048e04e63b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvd4w\" (UID: \"fe1f76c3-fb22-4c92-bc66-7048e04e63b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408831 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc13833f-8cfa-417a-a16a-420e5b00843b-config-volume\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408862 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnn2f\" (UniqueName: \"kubernetes.io/projected/061a3cd6-2958-4635-9ce1-4989d98d8432-kube-api-access-nnn2f\") pod \"package-server-manager-789f6589d5-dskl4\" (UID: 
\"061a3cd6-2958-4635-9ce1-4989d98d8432\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408885 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3bc2976e-bdb0-4450-9c5e-73052e705f7a-tmpfs\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408907 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-images\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408930 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ncs\" (UniqueName: \"kubernetes.io/projected/deb5d218-90f7-429b-b09d-137dd3422ea0-kube-api-access-k9ncs\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqt2w\" (UniqueName: \"kubernetes.io/projected/40856145-6171-40e7-97b9-51268a8c348b-kube-api-access-tqt2w\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.408998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409019 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d025d0-f89c-4e18-aae4-f923d5797693-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409118 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-node-bootstrap-token\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-bound-sa-token\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409277 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f201fb44-80ca-4142-aaf9-4cfbd8215309-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409398 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/deb5d218-90f7-429b-b09d-137dd3422ea0-signing-cabundle\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409454 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-config\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409511 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-stats-auth\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409543 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/850edb82-dd95-474f-a41c-3aa48faa4b87-serving-cert\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n9gpp\" (UniqueName: \"kubernetes.io/projected/b0aac105-3797-4d99-ad0f-443048b96b0a-kube-api-access-n9gpp\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-ca\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409612 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nkp\" (UniqueName: \"kubernetes.io/projected/dc6efe70-637d-42bf-a57e-92ed4503a23a-kube-api-access-s2nkp\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409649 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjb4c\" (UniqueName: \"kubernetes.io/projected/4ec81ced-fd94-49a3-aa46-d7febbbfb825-kube-api-access-jjb4c\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409675 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e7d85a9-ce97-4481-934c-3e6f3c720007-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: 
\"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zb9k\" (UniqueName: \"kubernetes.io/projected/cc343e66-a0f1-4f9b-bbba-77b38c0260e7-kube-api-access-6zb9k\") pod \"dns-operator-744455d44c-brrd5\" (UID: \"cc343e66-a0f1-4f9b-bbba-77b38c0260e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409784 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqfcl\" (UniqueName: \"kubernetes.io/projected/e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f-kube-api-access-cqfcl\") pod \"multus-admission-controller-857f4d67dd-7p66s\" (UID: \"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.409911 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-trusted-ca\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410044 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410081 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-l5pnb\" (UniqueName: \"kubernetes.io/projected/42d025d0-f89c-4e18-aae4-f923d5797693-kube-api-access-l5pnb\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410121 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-mountpoint-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410149 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-client-ca\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410240 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-tls\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410268 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0aac105-3797-4d99-ad0f-443048b96b0a-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410304 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-config\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-proxy-tls\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410352 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dff4ff8a-f156-4da4-ba81-477078a8345d-secret-volume\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410374 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq9rj\" (UniqueName: \"kubernetes.io/projected/dff4ff8a-f156-4da4-ba81-477078a8345d-kube-api-access-nq9rj\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410416 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ec81ced-fd94-49a3-aa46-d7febbbfb825-srv-cert\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410491 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-registration-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07b76f5-7f05-410e-a0e9-3aff034786ad-serving-cert\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410543 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbxj\" (UniqueName: \"kubernetes.io/projected/a07b76f5-7f05-410e-a0e9-3aff034786ad-kube-api-access-bpbxj\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410583 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 
crc kubenswrapper[4718]: I1210 14:34:05.410631 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzqrt\" (UniqueName: \"kubernetes.io/projected/451fb12e-a97f-441e-8a8c-d4c217640aef-kube-api-access-jzqrt\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.410651 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-csi-data-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412269 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-trusted-ca\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.412298 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:05.912281068 +0000 UTC m=+150.861504485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412724 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7d85a9-ce97-4481-934c-3e6f3c720007-config\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-config\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412861 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/dc6efe70-637d-42bf-a57e-92ed4503a23a-machine-approver-tls\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwp4\" (UniqueName: 
\"kubernetes.io/projected/fe1f76c3-fb22-4c92-bc66-7048e04e63b0-kube-api-access-scwp4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvd4w\" (UID: \"fe1f76c3-fb22-4c92-bc66-7048e04e63b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412918 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qp95\" (UniqueName: \"kubernetes.io/projected/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-kube-api-access-2qp95\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.412940 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7xz\" (UniqueName: \"kubernetes.io/projected/f201fb44-80ca-4142-aaf9-4cfbd8215309-kube-api-access-8d7xz\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.413280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-client-ca\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.413310 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-plugins-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: 
\"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.413334 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmlmm\" (UniqueName: \"kubernetes.io/projected/28dfccad-4a7e-463f-a8b3-77a451a865ae-kube-api-access-fmlmm\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.413353 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40856145-6171-40e7-97b9-51268a8c348b-service-ca-bundle\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.413377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.413412 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdnpm\" (UniqueName: \"kubernetes.io/projected/db2db73c-d11d-46e6-9cc6-331faf9a21ca-kube-api-access-rdnpm\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.414445 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qkpt\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-kube-api-access-4qkpt\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.414475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgmh\" (UniqueName: \"kubernetes.io/projected/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-kube-api-access-gmgmh\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.414518 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2638a0da-6209-4691-a4d4-6aa91a4ca547-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.414555 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc343e66-a0f1-4f9b-bbba-77b38c0260e7-metrics-tls\") pod \"dns-operator-744455d44c-brrd5\" (UID: \"cc343e66-a0f1-4f9b-bbba-77b38c0260e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.414875 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5042f5-52a1-42da-9a21-72d7b2a75c75-config\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.414973 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-auth-proxy-config\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415201 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-certs\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415263 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-encryption-config\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415297 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtvg7\" (UniqueName: \"kubernetes.io/projected/b7432a0e-050f-4112-a75d-f8687233cf0f-kube-api-access-vtvg7\") pod \"downloads-7954f5f757-hbnxp\" (UID: \"b7432a0e-050f-4112-a75d-f8687233cf0f\") " pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-metrics-certs\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415354 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-serving-cert\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415456 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/061a3cd6-2958-4635-9ce1-4989d98d8432-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dskl4\" (UID: \"061a3cd6-2958-4635-9ce1-4989d98d8432\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.415528 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdv4x\" (UniqueName: \"kubernetes.io/projected/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-kube-api-access-gdv4x\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.416325 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-tls\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 
14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.419231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2638a0da-6209-4691-a4d4-6aa91a4ca547-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.440823 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-bound-sa-token\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.471124 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l"] Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516566 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516827 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516856 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-config\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516905 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-default-certificate\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516950 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wjdq\" (UniqueName: \"kubernetes.io/projected/bc13833f-8cfa-417a-a16a-420e5b00843b-kube-api-access-6wjdq\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.516977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bc2976e-bdb0-4450-9c5e-73052e705f7a-webhook-cert\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 
crc kubenswrapper[4718]: I1210 14:34:05.516995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vrx6\" (UniqueName: \"kubernetes.io/projected/522ecb50-d5ad-4834-9a63-a4c843ca824d-kube-api-access-5vrx6\") pod \"ingress-canary-l47zr\" (UID: \"522ecb50-d5ad-4834-9a63-a4c843ca824d\") " pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517011 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-config\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517025 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc6efe70-637d-42bf-a57e-92ed4503a23a-auth-proxy-config\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517055 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-socket-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " 
pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517073 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78119cb9-deb1-4a37-bc05-7b911969047f-audit-dir\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517092 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d025d0-f89c-4e18-aae4-f923d5797693-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517110 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9989fa-f608-438c-9bfd-f06ef32470c7-config\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517126 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bc2976e-bdb0-4450-9c5e-73052e705f7a-apiservice-cert\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517148 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5042f5-52a1-42da-9a21-72d7b2a75c75-images\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: 
\"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517166 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9989fa-f608-438c-9bfd-f06ef32470c7-serving-cert\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517182 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc6efe70-637d-42bf-a57e-92ed4503a23a-config\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/079f1ed7-7f70-4b4c-9afd-cf0286348562-srv-cert\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517217 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28dfccad-4a7e-463f-a8b3-77a451a865ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517237 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/28dfccad-4a7e-463f-a8b3-77a451a865ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517254 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/079f1ed7-7f70-4b4c-9afd-cf0286348562-profile-collector-cert\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517271 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/deb5d218-90f7-429b-b09d-137dd3422ea0-signing-key\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517287 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-audit-policies\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517304 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-service-ca\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517322 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bwk8h\" (UniqueName: \"kubernetes.io/projected/850edb82-dd95-474f-a41c-3aa48faa4b87-kube-api-access-bwk8h\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517342 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5042f5-52a1-42da-9a21-72d7b2a75c75-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms24s\" (UniqueName: \"kubernetes.io/projected/3bc2976e-bdb0-4450-9c5e-73052e705f7a-kube-api-access-ms24s\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517411 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wjsx\" (UniqueName: \"kubernetes.io/projected/9c06237f-9a58-430d-93c8-69297f9e1363-kube-api-access-5wjsx\") pod \"migrator-59844c95c7-kkv25\" (UID: \"9c06237f-9a58-430d-93c8-69297f9e1363\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/522ecb50-d5ad-4834-9a63-a4c843ca824d-cert\") pod \"ingress-canary-l47zr\" (UID: \"522ecb50-d5ad-4834-9a63-a4c843ca824d\") " pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:05 crc 
kubenswrapper[4718]: I1210 14:34:05.517459 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-serving-cert\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc13833f-8cfa-417a-a16a-420e5b00843b-metrics-tls\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdhv9\" (UniqueName: \"kubernetes.io/projected/4f9989fa-f608-438c-9bfd-f06ef32470c7-kube-api-access-pdhv9\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517523 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-etcd-client\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lg7h\" (UniqueName: \"kubernetes.io/projected/78119cb9-deb1-4a37-bc05-7b911969047f-kube-api-access-7lg7h\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517641 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-client\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7d85a9-ce97-4481-934c-3e6f3c720007-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517688 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5dnw\" (UniqueName: \"kubernetes.io/projected/079f1ed7-7f70-4b4c-9afd-cf0286348562-kube-api-access-k5dnw\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517706 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fe1f76c3-fb22-4c92-bc66-7048e04e63b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvd4w\" (UID: \"fe1f76c3-fb22-4c92-bc66-7048e04e63b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517727 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc13833f-8cfa-417a-a16a-420e5b00843b-config-volume\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517750 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3bc2976e-bdb0-4450-9c5e-73052e705f7a-tmpfs\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517774 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-images\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ncs\" (UniqueName: \"kubernetes.io/projected/deb5d218-90f7-429b-b09d-137dd3422ea0-kube-api-access-k9ncs\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517846 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nnn2f\" (UniqueName: \"kubernetes.io/projected/061a3cd6-2958-4635-9ce1-4989d98d8432-kube-api-access-nnn2f\") pod \"package-server-manager-789f6589d5-dskl4\" (UID: \"061a3cd6-2958-4635-9ce1-4989d98d8432\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517875 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqt2w\" (UniqueName: \"kubernetes.io/projected/40856145-6171-40e7-97b9-51268a8c348b-kube-api-access-tqt2w\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517896 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d025d0-f89c-4e18-aae4-f923d5797693-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.517941 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-node-bootstrap-token\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " 
pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f201fb44-80ca-4142-aaf9-4cfbd8215309-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518620 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/deb5d218-90f7-429b-b09d-137dd3422ea0-signing-cabundle\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-config\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518683 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-stats-auth\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518700 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc6efe70-637d-42bf-a57e-92ed4503a23a-config\") pod 
\"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518715 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-ca\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518821 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2nkp\" (UniqueName: \"kubernetes.io/projected/dc6efe70-637d-42bf-a57e-92ed4503a23a-kube-api-access-s2nkp\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518845 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/850edb82-dd95-474f-a41c-3aa48faa4b87-serving-cert\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518868 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9gpp\" (UniqueName: \"kubernetes.io/projected/b0aac105-3797-4d99-ad0f-443048b96b0a-kube-api-access-n9gpp\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518889 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e7d85a9-ce97-4481-934c-3e6f3c720007-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjb4c\" (UniqueName: \"kubernetes.io/projected/4ec81ced-fd94-49a3-aa46-d7febbbfb825-kube-api-access-jjb4c\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518943 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zb9k\" (UniqueName: \"kubernetes.io/projected/cc343e66-a0f1-4f9b-bbba-77b38c0260e7-kube-api-access-6zb9k\") pod \"dns-operator-744455d44c-brrd5\" (UID: \"cc343e66-a0f1-4f9b-bbba-77b38c0260e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.518969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqfcl\" (UniqueName: \"kubernetes.io/projected/e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f-kube-api-access-cqfcl\") pod \"multus-admission-controller-857f4d67dd-7p66s\" (UID: \"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519201 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5pnb\" (UniqueName: \"kubernetes.io/projected/42d025d0-f89c-4e18-aae4-f923d5797693-kube-api-access-l5pnb\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519220 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-mountpoint-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519237 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-client-ca\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519262 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519287 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-config\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519310 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-proxy-tls\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: 
\"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519329 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0aac105-3797-4d99-ad0f-443048b96b0a-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519353 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ec81ced-fd94-49a3-aa46-d7febbbfb825-srv-cert\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519372 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-registration-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519412 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07b76f5-7f05-410e-a0e9-3aff034786ad-serving-cert\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519435 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbxj\" (UniqueName: 
\"kubernetes.io/projected/a07b76f5-7f05-410e-a0e9-3aff034786ad-kube-api-access-bpbxj\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dff4ff8a-f156-4da4-ba81-477078a8345d-secret-volume\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519471 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq9rj\" (UniqueName: \"kubernetes.io/projected/dff4ff8a-f156-4da4-ba81-477078a8345d-kube-api-access-nq9rj\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519501 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-csi-data-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519525 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzqrt\" (UniqueName: \"kubernetes.io/projected/451fb12e-a97f-441e-8a8c-d4c217640aef-kube-api-access-jzqrt\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519547 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7d85a9-ce97-4481-934c-3e6f3c720007-config\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519569 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-config\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519594 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/dc6efe70-637d-42bf-a57e-92ed4503a23a-machine-approver-tls\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwp4\" (UniqueName: \"kubernetes.io/projected/fe1f76c3-fb22-4c92-bc66-7048e04e63b0-kube-api-access-scwp4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvd4w\" (UID: \"fe1f76c3-fb22-4c92-bc66-7048e04e63b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519633 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qp95\" (UniqueName: \"kubernetes.io/projected/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-kube-api-access-2qp95\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: 
\"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7xz\" (UniqueName: \"kubernetes.io/projected/f201fb44-80ca-4142-aaf9-4cfbd8215309-kube-api-access-8d7xz\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.519955 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-ca\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520091 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-socket-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520153 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/78119cb9-deb1-4a37-bc05-7b911969047f-audit-dir\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520254 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/dc6efe70-637d-42bf-a57e-92ed4503a23a-auth-proxy-config\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.520349 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.020329728 +0000 UTC m=+150.969553145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520798 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-client-ca\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520860 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-plugins-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520879 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fmlmm\" (UniqueName: \"kubernetes.io/projected/28dfccad-4a7e-463f-a8b3-77a451a865ae-kube-api-access-fmlmm\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40856145-6171-40e7-97b9-51268a8c348b-service-ca-bundle\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520916 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdnpm\" (UniqueName: \"kubernetes.io/projected/db2db73c-d11d-46e6-9cc6-331faf9a21ca-kube-api-access-rdnpm\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520968 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qkpt\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-kube-api-access-4qkpt\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.520993 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmgmh\" (UniqueName: \"kubernetes.io/projected/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-kube-api-access-gmgmh\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521015 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc343e66-a0f1-4f9b-bbba-77b38c0260e7-metrics-tls\") pod \"dns-operator-744455d44c-brrd5\" (UID: \"cc343e66-a0f1-4f9b-bbba-77b38c0260e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521055 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5042f5-52a1-42da-9a21-72d7b2a75c75-config\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521081 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-auth-proxy-config\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-certs\") pod 
\"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521118 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-metrics-certs\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521137 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-serving-cert\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-csi-data-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521153 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-encryption-config\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521213 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtvg7\" (UniqueName: \"kubernetes.io/projected/b7432a0e-050f-4112-a75d-f8687233cf0f-kube-api-access-vtvg7\") 
pod \"downloads-7954f5f757-hbnxp\" (UID: \"b7432a0e-050f-4112-a75d-f8687233cf0f\") " pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521238 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/061a3cd6-2958-4635-9ce1-4989d98d8432-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dskl4\" (UID: \"061a3cd6-2958-4635-9ce1-4989d98d8432\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521261 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdv4x\" (UniqueName: \"kubernetes.io/projected/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-kube-api-access-gdv4x\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521286 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff4ff8a-f156-4da4-ba81-477078a8345d-config-volume\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521328 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2cw6v\" (UniqueName: \"kubernetes.io/projected/8b5042f5-52a1-42da-9a21-72d7b2a75c75-kube-api-access-2cw6v\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521347 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7p66s\" (UID: \"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521404 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f9989fa-f608-438c-9bfd-f06ef32470c7-trusted-ca\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521426 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ec81ced-fd94-49a3-aa46-d7febbbfb825-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521448 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28dfccad-4a7e-463f-a8b3-77a451a865ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521471 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f201fb44-80ca-4142-aaf9-4cfbd8215309-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521493 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.521869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-config\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.523819 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/079f1ed7-7f70-4b4c-9afd-cf0286348562-srv-cert\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.524678 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f9989fa-f608-438c-9bfd-f06ef32470c7-serving-cert\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.524853 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/28dfccad-4a7e-463f-a8b3-77a451a865ae-metrics-tls\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.525198 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-audit-policies\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.525713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f9989fa-f608-438c-9bfd-f06ef32470c7-config\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.526968 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-etcd-client\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.527362 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/deb5d218-90f7-429b-b09d-137dd3422ea0-signing-cabundle\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.527472 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-config\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.528529 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-mountpoint-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.528888 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.529243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-client\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.529345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-client-ca\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.529832 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.530367 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-config\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.531275 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.531592 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.531848 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-plugins-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.532120 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42d025d0-f89c-4e18-aae4-f923d5797693-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.532209 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4f9989fa-f608-438c-9bfd-f06ef32470c7-trusted-ca\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.532492 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a07b76f5-7f05-410e-a0e9-3aff034786ad-etcd-service-ca\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.532600 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/079f1ed7-7f70-4b4c-9afd-cf0286348562-profile-collector-cert\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.533649 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-client-ca\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.534307 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.534469 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-config\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.534563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d025d0-f89c-4e18-aae4-f923d5797693-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 
10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.534658 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/061a3cd6-2958-4635-9ce1-4989d98d8432-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dskl4\" (UID: \"061a3cd6-2958-4635-9ce1-4989d98d8432\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.534971 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-config\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.535016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e7d85a9-ce97-4481-934c-3e6f3c720007-config\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.535024 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/db2db73c-d11d-46e6-9cc6-331faf9a21ca-registration-dir\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.535224 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/deb5d218-90f7-429b-b09d-137dd3422ea0-signing-key\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.535721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/78119cb9-deb1-4a37-bc05-7b911969047f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.535812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.536134 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40856145-6171-40e7-97b9-51268a8c348b-service-ca-bundle\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.536457 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-auth-proxy-config\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.536841 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3bc2976e-bdb0-4450-9c5e-73052e705f7a-tmpfs\") pod \"packageserver-d55dfcdfc-r4272\" (UID: 
\"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.537507 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-images\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.537114 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc13833f-8cfa-417a-a16a-420e5b00843b-config-volume\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.537993 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-encryption-config\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.538109 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc343e66-a0f1-4f9b-bbba-77b38c0260e7-metrics-tls\") pod \"dns-operator-744455d44c-brrd5\" (UID: \"cc343e66-a0f1-4f9b-bbba-77b38c0260e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.538217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28dfccad-4a7e-463f-a8b3-77a451a865ae-trusted-ca\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: 
\"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.539266 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-stats-auth\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.539788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4ec81ced-fd94-49a3-aa46-d7febbbfb825-profile-collector-cert\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540048 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bc2976e-bdb0-4450-9c5e-73052e705f7a-apiservice-cert\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540112 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f201fb44-80ca-4142-aaf9-4cfbd8215309-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540499 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/8b5042f5-52a1-42da-9a21-72d7b2a75c75-images\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540529 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc13833f-8cfa-417a-a16a-420e5b00843b-metrics-tls\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540611 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-default-certificate\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff4ff8a-f156-4da4-ba81-477078a8345d-config-volume\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540856 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b5042f5-52a1-42da-9a21-72d7b2a75c75-config\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.540965 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.541136 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7p66s\" (UID: \"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.541569 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e7d85a9-ce97-4481-934c-3e6f3c720007-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.542166 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f201fb44-80ca-4142-aaf9-4cfbd8215309-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.542346 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/dc6efe70-637d-42bf-a57e-92ed4503a23a-machine-approver-tls\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.542943 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bc2976e-bdb0-4450-9c5e-73052e705f7a-webhook-cert\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.543313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-serving-cert\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.543464 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4ec81ced-fd94-49a3-aa46-d7febbbfb825-srv-cert\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.543685 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0aac105-3797-4d99-ad0f-443048b96b0a-serving-cert\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.543760 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fe1f76c3-fb22-4c92-bc66-7048e04e63b0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvd4w\" (UID: \"fe1f76c3-fb22-4c92-bc66-7048e04e63b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.543980 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-proxy-tls\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.544629 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5042f5-52a1-42da-9a21-72d7b2a75c75-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.544695 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/850edb82-dd95-474f-a41c-3aa48faa4b87-serving-cert\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.544912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dff4ff8a-f156-4da4-ba81-477078a8345d-secret-volume\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 
14:34:05.544929 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78119cb9-deb1-4a37-bc05-7b911969047f-serving-cert\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.545206 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40856145-6171-40e7-97b9-51268a8c348b-metrics-certs\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.547353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07b76f5-7f05-410e-a0e9-3aff034786ad-serving-cert\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.547548 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/522ecb50-d5ad-4834-9a63-a4c843ca824d-cert\") pod \"ingress-canary-l47zr\" (UID: \"522ecb50-d5ad-4834-9a63-a4c843ca824d\") " pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.547592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-node-bootstrap-token\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.550113 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-certs\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.564360 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdhv9\" (UniqueName: \"kubernetes.io/projected/4f9989fa-f608-438c-9bfd-f06ef32470c7-kube-api-access-pdhv9\") pod \"console-operator-58897d9998-qq8sm\" (UID: \"4f9989fa-f608-438c-9bfd-f06ef32470c7\") " pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.580664 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e7d85a9-ce97-4481-934c-3e6f3c720007-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-44d9w\" (UID: \"2e7d85a9-ce97-4481-934c-3e6f3c720007\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.607535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c9a984d-0762-47e3-82e4-8b4f9d6bea39-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-g5xg4\" (UID: \"1c9a984d-0762-47e3-82e4-8b4f9d6bea39\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.622845 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.623379 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.12336029 +0000 UTC m=+151.072583707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.625931 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/799e0d54-7a8e-48e5-b5d0-db944b3cbe25-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-s92b8\" (UID: \"799e0d54-7a8e-48e5-b5d0-db944b3cbe25\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.640818 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwk8h\" (UniqueName: \"kubernetes.io/projected/850edb82-dd95-474f-a41c-3aa48faa4b87-kube-api-access-bwk8h\") pod \"controller-manager-879f6c89f-vws9k\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.662381 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/28dfccad-4a7e-463f-a8b3-77a451a865ae-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.683129 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lg7h\" (UniqueName: \"kubernetes.io/projected/78119cb9-deb1-4a37-bc05-7b911969047f-kube-api-access-7lg7h\") pod \"apiserver-7bbb656c7d-lh6qt\" (UID: \"78119cb9-deb1-4a37-bc05-7b911969047f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.687062 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.700835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ncs\" (UniqueName: \"kubernetes.io/projected/deb5d218-90f7-429b-b09d-137dd3422ea0-kube-api-access-k9ncs\") pod \"service-ca-9c57cc56f-v2z2q\" (UID: \"deb5d218-90f7-429b-b09d-137dd3422ea0\") " pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.723721 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.723937 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:06.223899762 +0000 UTC m=+151.173123179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.724285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.724808 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.224787614 +0000 UTC m=+151.174011041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.727626 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnn2f\" (UniqueName: \"kubernetes.io/projected/061a3cd6-2958-4635-9ce1-4989d98d8432-kube-api-access-nnn2f\") pod \"package-server-manager-789f6589d5-dskl4\" (UID: \"061a3cd6-2958-4635-9ce1-4989d98d8432\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.734310 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.747241 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cw6v\" (UniqueName: \"kubernetes.io/projected/8b5042f5-52a1-42da-9a21-72d7b2a75c75-kube-api-access-2cw6v\") pod \"machine-api-operator-5694c8668f-prs2h\" (UID: \"8b5042f5-52a1-42da-9a21-72d7b2a75c75\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.747626 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.762531 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq9rj\" (UniqueName: \"kubernetes.io/projected/dff4ff8a-f156-4da4-ba81-477078a8345d-kube-api-access-nq9rj\") pod \"collect-profiles-29422950-4cxn6\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.767483 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.783724 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtvg7\" (UniqueName: \"kubernetes.io/projected/b7432a0e-050f-4112-a75d-f8687233cf0f-kube-api-access-vtvg7\") pod \"downloads-7954f5f757-hbnxp\" (UID: \"b7432a0e-050f-4112-a75d-f8687233cf0f\") " pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.812581 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrx6\" (UniqueName: \"kubernetes.io/projected/522ecb50-d5ad-4834-9a63-a4c843ca824d-kube-api-access-5vrx6\") pod \"ingress-canary-l47zr\" (UID: \"522ecb50-d5ad-4834-9a63-a4c843ca824d\") " pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.828152 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 
14:34:05.828645 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.328625848 +0000 UTC m=+151.277849265 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.837290 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbxj\" (UniqueName: \"kubernetes.io/projected/a07b76f5-7f05-410e-a0e9-3aff034786ad-kube-api-access-bpbxj\") pod \"etcd-operator-b45778765-rn9bk\" (UID: \"a07b76f5-7f05-410e-a0e9-3aff034786ad\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.842230 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjb4c\" (UniqueName: \"kubernetes.io/projected/4ec81ced-fd94-49a3-aa46-d7febbbfb825-kube-api-access-jjb4c\") pod \"olm-operator-6b444d44fb-4bmlt\" (UID: \"4ec81ced-fd94-49a3-aa46-d7febbbfb825\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.850380 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.855737 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.866548 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.874162 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.887955 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.895142 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqfcl\" (UniqueName: \"kubernetes.io/projected/e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f-kube-api-access-cqfcl\") pod \"multus-admission-controller-857f4d67dd-7p66s\" (UID: \"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.900062 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zb9k\" (UniqueName: \"kubernetes.io/projected/cc343e66-a0f1-4f9b-bbba-77b38c0260e7-kube-api-access-6zb9k\") pod \"dns-operator-744455d44c-brrd5\" (UID: \"cc343e66-a0f1-4f9b-bbba-77b38c0260e7\") " pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.908495 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5pnb\" (UniqueName: \"kubernetes.io/projected/42d025d0-f89c-4e18-aae4-f923d5797693-kube-api-access-l5pnb\") pod \"openshift-apiserver-operator-796bbdcf4f-rnbkw\" (UID: \"42d025d0-f89c-4e18-aae4-f923d5797693\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.925212 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdv4x\" (UniqueName: \"kubernetes.io/projected/4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35-kube-api-access-gdv4x\") pod \"service-ca-operator-777779d784-8dm4f\" (UID: \"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.925376 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.929609 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: E1210 14:34:05.929951 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.429938549 +0000 UTC m=+151.379161966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.941777 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.945621 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.948832 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdnpm\" (UniqueName: \"kubernetes.io/projected/db2db73c-d11d-46e6-9cc6-331faf9a21ca-kube-api-access-rdnpm\") pod \"csi-hostpathplugin-xrpzk\" (UID: \"db2db73c-d11d-46e6-9cc6-331faf9a21ca\") " pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.960335 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8"] Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.962463 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.963557 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qkpt\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-kube-api-access-4qkpt\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.979171 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:05 crc kubenswrapper[4718]: I1210 14:34:05.986651 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmgmh\" (UniqueName: \"kubernetes.io/projected/ecc5d9e7-3f61-4863-8b70-3f71e1c0278b-kube-api-access-gmgmh\") pod \"machine-config-server-fqhgq\" (UID: \"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b\") " pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.000370 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-v2z2q"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.020992 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.029583 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.030500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.030794 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.030938 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.530898011 +0000 UTC m=+151.480121438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.031200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.031250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.035438 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.037183 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.044711 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qp95\" (UniqueName: \"kubernetes.io/projected/96a4fa95-2d84-4fc6-94a4-5629fe98e3ce-kube-api-access-2qp95\") pod \"machine-config-operator-74547568cd-btqhn\" (UID: \"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.063956 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.063960 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7xz\" (UniqueName: \"kubernetes.io/projected/f201fb44-80ca-4142-aaf9-4cfbd8215309-kube-api-access-8d7xz\") pod \"kube-storage-version-migrator-operator-b67b599dd-xtqqs\" (UID: \"f201fb44-80ca-4142-aaf9-4cfbd8215309\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.077263 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.078961 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.081466 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqt2w\" (UniqueName: \"kubernetes.io/projected/40856145-6171-40e7-97b9-51268a8c348b-kube-api-access-tqt2w\") pod \"router-default-5444994796-7fd22\" (UID: \"40856145-6171-40e7-97b9-51268a8c348b\") " pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:06 crc kubenswrapper[4718]: W1210 14:34:06.083552 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799e0d54_7a8e_48e5_b5d0_db944b3cbe25.slice/crio-562ee6f7d8ccba82d357d5ea09497008300d4f57dc3d45e568f9a5f78006f0d5 WatchSource:0}: Error finding container 562ee6f7d8ccba82d357d5ea09497008300d4f57dc3d45e568f9a5f78006f0d5: Status 404 returned error can't find the container with id 562ee6f7d8ccba82d357d5ea09497008300d4f57dc3d45e568f9a5f78006f0d5 Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.084255 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wjsx\" (UniqueName: \"kubernetes.io/projected/9c06237f-9a58-430d-93c8-69297f9e1363-kube-api-access-5wjsx\") pod \"migrator-59844c95c7-kkv25\" (UID: \"9c06237f-9a58-430d-93c8-69297f9e1363\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.095988 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l47zr" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.101909 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwp4\" (UniqueName: \"kubernetes.io/projected/fe1f76c3-fb22-4c92-bc66-7048e04e63b0-kube-api-access-scwp4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bvd4w\" (UID: \"fe1f76c3-fb22-4c92-bc66-7048e04e63b0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.111767 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fqhgq" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.124436 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmlmm\" (UniqueName: \"kubernetes.io/projected/28dfccad-4a7e-463f-a8b3-77a451a865ae-kube-api-access-fmlmm\") pod \"ingress-operator-5b745b69d9-9zbxh\" (UID: \"28dfccad-4a7e-463f-a8b3-77a451a865ae\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.132406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.132452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.132849 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.632836068 +0000 UTC m=+151.582059485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.139766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.148035 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2nkp\" (UniqueName: \"kubernetes.io/projected/dc6efe70-637d-42bf-a57e-92ed4503a23a-kube-api-access-s2nkp\") pod \"machine-approver-56656f9798-76q8c\" (UID: \"dc6efe70-637d-42bf-a57e-92ed4503a23a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.148416 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-prs2h"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.174599 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9gpp\" (UniqueName: \"kubernetes.io/projected/b0aac105-3797-4d99-ad0f-443048b96b0a-kube-api-access-n9gpp\") pod \"route-controller-manager-6576b87f9c-wgzns\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.181753 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.183286 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzqrt\" (UniqueName: \"kubernetes.io/projected/451fb12e-a97f-441e-8a8c-d4c217640aef-kube-api-access-jzqrt\") pod \"marketplace-operator-79b997595-twtlk\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") " pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.187697 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.195932 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.202879 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.206706 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wjdq\" (UniqueName: \"kubernetes.io/projected/bc13833f-8cfa-417a-a16a-420e5b00843b-kube-api-access-6wjdq\") pod \"dns-default-gvvtl\" (UID: \"bc13833f-8cfa-417a-a16a-420e5b00843b\") " pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.216760 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.223760 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5dnw\" (UniqueName: \"kubernetes.io/projected/079f1ed7-7f70-4b4c-9afd-cf0286348562-kube-api-access-k5dnw\") pod \"catalog-operator-68c6474976-qn8tr\" (UID: \"079f1ed7-7f70-4b4c-9afd-cf0286348562\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.231062 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.233482 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.233923 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.733896993 +0000 UTC m=+151.683120420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.237593 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.245779 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.256439 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms24s\" (UniqueName: \"kubernetes.io/projected/3bc2976e-bdb0-4450-9c5e-73052e705f7a-kube-api-access-ms24s\") pod \"packageserver-d55dfcdfc-r4272\" (UID: \"3bc2976e-bdb0-4450-9c5e-73052e705f7a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.300915 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.301044 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.313423 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.336205 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.336793 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.836745242 +0000 UTC m=+151.785968659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.339471 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.354902 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.388569 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.421181 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.448739 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.449638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.450093 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:06.950071983 +0000 UTC m=+151.899295400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.470757 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.484674 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" event={"ID":"78119cb9-deb1-4a37-bc05-7b911969047f","Type":"ContainerStarted","Data":"6de288bf17e09e1f1463d3446d784470b932e610c0da230f1b202daf0e20ce98"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.498262 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" event={"ID":"deb5d218-90f7-429b-b09d-137dd3422ea0","Type":"ContainerStarted","Data":"9a4c53bfaef5a364b035dd9ecd1d6c0bc3a319b0e42a1dc5509410328ab2c1b7"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.503636 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" event={"ID":"62d109b9-dd6e-47c6-b384-336b480f804d","Type":"ContainerStarted","Data":"46aaeeedf0514899ea0eb489bc15dc2c5af4fda9d0e2c2ae988b5ecfc03df265"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.510237 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.523690 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" event={"ID":"8908165f-bd25-44b8-916e-5b910ce5d74c","Type":"ContainerStarted","Data":"28614c0c76067e8eaf708cc31d28e15a6892f090552a7e2f92e245a0eed65308"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.535976 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.539967 4718 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tk99n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" start-of-body= Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.540063 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.547495 4718 generic.go:334] "Generic (PLEG): container finished" podID="f18678b9-691a-4582-b327-b5bc9f1983d8" containerID="73a0eb756d13e8ad7d60697ff9507663dd0457f3e27b754a6e6514796c08e55d" exitCode=0 Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.549883 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" event={"ID":"f18678b9-691a-4582-b327-b5bc9f1983d8","Type":"ContainerDied","Data":"73a0eb756d13e8ad7d60697ff9507663dd0457f3e27b754a6e6514796c08e55d"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.555492 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hbnxp"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.556452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.560234 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.060182664 +0000 UTC m=+152.009406081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.564379 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" event={"ID":"799e0d54-7a8e-48e5-b5d0-db944b3cbe25","Type":"ContainerStarted","Data":"562ee6f7d8ccba82d357d5ea09497008300d4f57dc3d45e568f9a5f78006f0d5"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.569988 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" event={"ID":"da647d92-a61b-4c5d-b97c-730df809d8fb","Type":"ContainerStarted","Data":"f3278537d84760702cc075e491777f18a8bc9c7bde07e4fb864feba6cbd7a4a9"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.617966 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qq8sm"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.618106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" event={"ID":"fdfd948f-0ed8-45ed-90ab-3126ba209608","Type":"ContainerStarted","Data":"f48c7ffffcbd32db8868e376e268e47ae337c6bcd84edf7f443013f2f2174b71"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.639298 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt"] Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.645086 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" event={"ID":"f4e99f92-30bf-44e1-a7b3-b5d481af2100","Type":"ContainerStarted","Data":"f53b155951b0a05f3e389a7551a4267dce6d0a099771db28c5c030c1e7e3e259"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.658285 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.658539 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.15850071 +0000 UTC m=+152.107724127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.658759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.663377 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.163352441 +0000 UTC m=+152.112575858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.671243 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" event={"ID":"ab797808-0785-40c6-8399-1a3bfef7b7f1","Type":"ContainerStarted","Data":"7102abd426fd5dd7b4b924d321c32ebcabf5b5c8bf8ad0a4b1ad5f86922ed1f9"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.683850 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" event={"ID":"8b5042f5-52a1-42da-9a21-72d7b2a75c75","Type":"ContainerStarted","Data":"336895e4e08a90edd6e3e3d300d24925d927a2c4da34808ffb64d0794f7b260f"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.702181 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" event={"ID":"1c9a984d-0762-47e3-82e4-8b4f9d6bea39","Type":"ContainerStarted","Data":"7a17d4cab58519c66d4a73350fd98dd407ef7de182f59d093364295ad9e5e2ef"} Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.766421 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.767096 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.267072152 +0000 UTC m=+152.216295569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.867709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.868257 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.36823989 +0000 UTC m=+152.317463307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:06 crc kubenswrapper[4718]: I1210 14:34:06.968358 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:06 crc kubenswrapper[4718]: E1210 14:34:06.969338 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.469310004 +0000 UTC m=+152.418533421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.071055 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.083622 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.583591979 +0000 UTC m=+152.532815396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.172366 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.172698 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.672631404 +0000 UTC m=+152.621854821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.173207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.174075 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.674062519 +0000 UTC m=+152.623285936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.275671 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.277020 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.77697466 +0000 UTC m=+152.726198077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.277545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.278731 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.778711394 +0000 UTC m=+152.727934811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.294050 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-p7wpl" podStartSLOduration=128.294005256 podStartE2EDuration="2m8.294005256s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.286896368 +0000 UTC m=+152.236119785" watchObservedRunningTime="2025-12-10 14:34:07.294005256 +0000 UTC m=+152.243228673" Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.353557 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rkbqm" podStartSLOduration=127.353516333 podStartE2EDuration="2m7.353516333s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.3306097 +0000 UTC m=+152.279833117" watchObservedRunningTime="2025-12-10 14:34:07.353516333 +0000 UTC m=+152.302739750" Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.372522 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l6wsm" podStartSLOduration=127.372494627 podStartE2EDuration="2m7.372494627s" 
podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.370191779 +0000 UTC m=+152.319415196" watchObservedRunningTime="2025-12-10 14:34:07.372494627 +0000 UTC m=+152.321718044" Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.381744 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.382470 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.882428895 +0000 UTC m=+152.831652312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.446054 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" podStartSLOduration=128.446002963 podStartE2EDuration="2m8.446002963s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.441561862 +0000 UTC m=+152.390785299" watchObservedRunningTime="2025-12-10 14:34:07.446002963 +0000 UTC m=+152.395226380" Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.483561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.484271 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:07.984247028 +0000 UTC m=+152.933470445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.586062 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.586379 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.086351799 +0000 UTC m=+153.035575216 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.692579 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.693323 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.193304951 +0000 UTC m=+153.142528368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.784691 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7p66s"] Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.784771 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xrpzk"] Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.795138 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.795467 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4"] Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.795527 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.295503374 +0000 UTC m=+153.244726791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.858726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" event={"ID":"f4e99f92-30bf-44e1-a7b3-b5d481af2100","Type":"ContainerStarted","Data":"edadc4534ef8321f22991117442c0e666b4e8734819f3d5a8bc56eb48e8fa3d5"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.875124 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" event={"ID":"dc6efe70-637d-42bf-a57e-92ed4503a23a","Type":"ContainerStarted","Data":"c0c618d9402b8399cc25626152ed05bbe1093d25660043073f05e16edaf33249"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.892703 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" event={"ID":"fdfd948f-0ed8-45ed-90ab-3126ba209608","Type":"ContainerStarted","Data":"71a254dc0a0a8228ed0309fba7a88539f70535c3f305f37274d9e1878e73c258"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.893996 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" event={"ID":"2e7d85a9-ce97-4481-934c-3e6f3c720007","Type":"ContainerStarted","Data":"6a0f0051e58d2ff601d164912485353d4eeb349959932ebcabf24da3509ffdd0"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.896862 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:07 crc kubenswrapper[4718]: E1210 14:34:07.897253 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.397236356 +0000 UTC m=+153.346459773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.897784 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t2gmf" event={"ID":"17fe734a-f022-4fd4-8276-661e662e2c6b","Type":"ContainerStarted","Data":"ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.911421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fqhgq" event={"ID":"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b","Type":"ContainerStarted","Data":"1896a9986d12079cdf1fd3a940e2713ddc6bcb89b87a3b73658c7f4ff01c0430"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.914495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbnxp" 
event={"ID":"b7432a0e-050f-4112-a75d-f8687233cf0f","Type":"ContainerStarted","Data":"c9b7906c6477516efc6b65338aec522aa6f6f40040a16c2dad3284fd8c9c1507"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.918442 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" event={"ID":"deb5d218-90f7-429b-b09d-137dd3422ea0","Type":"ContainerStarted","Data":"304dd0d43265cd275eb4c6a1e3480057a00b63649fb8990271dc11ff0efe964b"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.926491 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7fd22" event={"ID":"40856145-6171-40e7-97b9-51268a8c348b","Type":"ContainerStarted","Data":"0ba2a0385c28dc0cc17e7b9c95e3018701401c86495017c888826c279be71aa7"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.926833 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-t2gmf" podStartSLOduration=127.926804824 podStartE2EDuration="2m7.926804824s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.92421574 +0000 UTC m=+152.873439167" watchObservedRunningTime="2025-12-10 14:34:07.926804824 +0000 UTC m=+152.876028241" Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.947499 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" event={"ID":"4ec81ced-fd94-49a3-aa46-d7febbbfb825","Type":"ContainerStarted","Data":"32e50706da19073aa72a20708ab2e2014f11295ffe226426caa65db97512b988"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.958107 4718 generic.go:334] "Generic (PLEG): container finished" podID="ab797808-0785-40c6-8399-1a3bfef7b7f1" containerID="7102abd426fd5dd7b4b924d321c32ebcabf5b5c8bf8ad0a4b1ad5f86922ed1f9" exitCode=0 Dec 10 14:34:07 crc 
kubenswrapper[4718]: I1210 14:34:07.958297 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" event={"ID":"ab797808-0785-40c6-8399-1a3bfef7b7f1","Type":"ContainerDied","Data":"7102abd426fd5dd7b4b924d321c32ebcabf5b5c8bf8ad0a4b1ad5f86922ed1f9"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.969457 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" event={"ID":"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33","Type":"ContainerStarted","Data":"528ce00589e35ed31d99b6f71e5cd305814a543f03811cf8b692571b5421a8c7"} Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.971896 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-v2z2q" podStartSLOduration=126.97186554 podStartE2EDuration="2m6.97186554s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:07.948798124 +0000 UTC m=+152.898021541" watchObservedRunningTime="2025-12-10 14:34:07.97186554 +0000 UTC m=+152.921088957" Dec 10 14:34:07 crc kubenswrapper[4718]: I1210 14:34:07.987880 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qq8sm" event={"ID":"4f9989fa-f608-438c-9bfd-f06ef32470c7","Type":"ContainerStarted","Data":"72c41eef36f0967ca779b492d6d51f01007c57ade00d65d83de5dd8e5f91d3e1"} Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.005149 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc 
kubenswrapper[4718]: E1210 14:34:08.005334 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.505312126 +0000 UTC m=+153.454535543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.006443 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.007034 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.507018038 +0000 UTC m=+153.456241455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: W1210 14:34:08.079782 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode82e7008_55b0_4f3c_bdd7_8c9bfbc9284f.slice/crio-6351be66f68a0fc4997dc519045e23535a5dd3170ce86d00d10cdc6b92b4a447 WatchSource:0}: Error finding container 6351be66f68a0fc4997dc519045e23535a5dd3170ce86d00d10cdc6b92b4a447: Status 404 returned error can't find the container with id 6351be66f68a0fc4997dc519045e23535a5dd3170ce86d00d10cdc6b92b4a447 Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.104463 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.111658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.113369 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.613350545 +0000 UTC m=+153.562573962 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.129344 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" podStartSLOduration=128.129318093 podStartE2EDuration="2m8.129318093s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:08.004058444 +0000 UTC m=+152.953281861" watchObservedRunningTime="2025-12-10 14:34:08.129318093 +0000 UTC m=+153.078541510" Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.213130 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.213579 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.713561938 +0000 UTC m=+153.662785355 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.314754 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.315025 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.814974711 +0000 UTC m=+153.764198148 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.315327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.315822 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.815805852 +0000 UTC m=+153.765029459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.417105 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.418283 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:08.918258242 +0000 UTC m=+153.867481659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.520695 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.521107 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.021089581 +0000 UTC m=+153.970312988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.624934 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.625353 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.125316324 +0000 UTC m=+154.074539741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.630179 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.630738 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.130719789 +0000 UTC m=+154.079943206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.637703 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.651886 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.655666 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.734436 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.734792 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.234771119 +0000 UTC m=+154.183994536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: W1210 14:34:08.747456 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42d025d0_f89c_4e18_aae4_f923d5797693.slice/crio-06c7fea07905790343b4f9245cc6d8dc5f1e342075152349f7774170d41ddf03 WatchSource:0}: Error finding container 06c7fea07905790343b4f9245cc6d8dc5f1e342075152349f7774170d41ddf03: Status 404 returned error can't find the container with id 06c7fea07905790343b4f9245cc6d8dc5f1e342075152349f7774170d41ddf03 Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.753066 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vws9k"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.755331 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rn9bk"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.815487 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-brrd5"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.835814 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 
14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.836326 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.336310395 +0000 UTC m=+154.285533812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.836694 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.939369 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:08 crc kubenswrapper[4718]: E1210 14:34:08.940403 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.440360515 +0000 UTC m=+154.389583922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.991870 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l47zr"] Dec 10 14:34:08 crc kubenswrapper[4718]: I1210 14:34:08.995276 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.041580 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.041961 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.541947012 +0000 UTC m=+154.491170429 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.042058 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" event={"ID":"850edb82-dd95-474f-a41c-3aa48faa4b87","Type":"ContainerStarted","Data":"ea1157e1a81b647049cbf7d0c5af958cbebe649061d75d5bbde5fdbbb39354a2"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.049519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" event={"ID":"1c9a984d-0762-47e3-82e4-8b4f9d6bea39","Type":"ContainerStarted","Data":"140a5b449f457896133210d3a67004655b8de1ca25c2d7fcf8700bcecd45e9da"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.099779 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" event={"ID":"dc6efe70-637d-42bf-a57e-92ed4503a23a","Type":"ContainerStarted","Data":"8cee71110b40cb1125218f7d4c05fa0743567524fb645ed0f13d52aa830c55c8"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.101377 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g5xg4" podStartSLOduration=129.101350596 podStartE2EDuration="2m9.101350596s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 14:34:09.086701951 +0000 UTC m=+154.035925388" watchObservedRunningTime="2025-12-10 14:34:09.101350596 +0000 UTC m=+154.050574013" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.104609 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-twtlk"] Dec 10 14:34:09 crc kubenswrapper[4718]: W1210 14:34:09.114668 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ad0ce8cfb8c1106dec2ad97c1097e26d5f4724b94bdd40da0ce37fac4727e7e2 WatchSource:0}: Error finding container ad0ce8cfb8c1106dec2ad97c1097e26d5f4724b94bdd40da0ce37fac4727e7e2: Status 404 returned error can't find the container with id ad0ce8cfb8c1106dec2ad97c1097e26d5f4724b94bdd40da0ce37fac4727e7e2 Dec 10 14:34:09 crc kubenswrapper[4718]: W1210 14:34:09.135324 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-4958e753c156089f83a121334fe598f01a06f343376f0d9411b6ab000a00a808 WatchSource:0}: Error finding container 4958e753c156089f83a121334fe598f01a06f343376f0d9411b6ab000a00a808: Status 404 returned error can't find the container with id 4958e753c156089f83a121334fe598f01a06f343376f0d9411b6ab000a00a808 Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.142582 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.144615 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.644587356 +0000 UTC m=+154.593810773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.173519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" event={"ID":"f201fb44-80ca-4142-aaf9-4cfbd8215309","Type":"ContainerStarted","Data":"ab08799467d236e7a813988bb1ecbaa861247e27696506bd100e8799a9531930"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.187800 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" event={"ID":"a07b76f5-7f05-410e-a0e9-3aff034786ad","Type":"ContainerStarted","Data":"d3cc10c526333b95f07f3d68943f4c114842c7d6fd263cf7ac751d6fdd7ffdfc"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.198587 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" event={"ID":"4ec81ced-fd94-49a3-aa46-d7febbbfb825","Type":"ContainerStarted","Data":"e6fa724b7370679597c2d834351adc0933e4b664414ad81e71d8243b4b46bcf4"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.200350 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 
14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.209992 4718 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4bmlt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.210037 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" podUID="4ec81ced-fd94-49a3-aa46-d7febbbfb825" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.214046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fqhgq" event={"ID":"ecc5d9e7-3f61-4863-8b70-3f71e1c0278b","Type":"ContainerStarted","Data":"b953dcd3b9ef44de09cb8a2f67defec6955ed797d7dbc035f75ee8429c535d52"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.244762 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.246540 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.746503982 +0000 UTC m=+154.695727399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.258275 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" podStartSLOduration=128.258226645 podStartE2EDuration="2m8.258226645s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.233555038 +0000 UTC m=+154.182778455" watchObservedRunningTime="2025-12-10 14:34:09.258226645 +0000 UTC m=+154.207450062" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.261332 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fqhgq" podStartSLOduration=7.261278281 podStartE2EDuration="7.261278281s" podCreationTimestamp="2025-12-10 14:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.256696716 +0000 UTC m=+154.205920133" watchObservedRunningTime="2025-12-10 14:34:09.261278281 +0000 UTC m=+154.210501698" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.288905 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" event={"ID":"fdfd948f-0ed8-45ed-90ab-3126ba209608","Type":"ContainerStarted","Data":"540f4c74dd4ac2bea66fd19b1b80424f86de4456e50bbff30b31c3491000a3cf"} 
Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.326821 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-q5c57" podStartSLOduration=129.326789157 podStartE2EDuration="2m9.326789157s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.320184892 +0000 UTC m=+154.269408309" watchObservedRunningTime="2025-12-10 14:34:09.326789157 +0000 UTC m=+154.276012574" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.345959 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.349866 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.849838753 +0000 UTC m=+154.799062170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.356099 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" event={"ID":"ab797808-0785-40c6-8399-1a3bfef7b7f1","Type":"ContainerStarted","Data":"9a588811cada7a57faf4210e79491d9616ea1859062c5b62996d4c0b3b116f86"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.356701 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.374584 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hbnxp" event={"ID":"b7432a0e-050f-4112-a75d-f8687233cf0f","Type":"ContainerStarted","Data":"d01398e57d4eabd9ce5d2560b99c51eece829c3f364c52200ac15409e98782f0"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.375516 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.392607 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbnxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.392682 4718 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-hbnxp" podUID="b7432a0e-050f-4112-a75d-f8687233cf0f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.394794 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" event={"ID":"061a3cd6-2958-4635-9ce1-4989d98d8432","Type":"ContainerStarted","Data":"e7dfd4e0244f0e5c6ee3f7e4b3a0e2d5ca8edfe291d3468f9f6cb477c29c912a"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.434096 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" podStartSLOduration=129.434065097 podStartE2EDuration="2m9.434065097s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.428428366 +0000 UTC m=+154.377651783" watchObservedRunningTime="2025-12-10 14:34:09.434065097 +0000 UTC m=+154.383288514" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.436279 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.452240 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" event={"ID":"f4e99f92-30bf-44e1-a7b3-b5d481af2100","Type":"ContainerStarted","Data":"bd1ac1bdc14b19ce80b6317924fa1ac0c519d3384ae8bc75355a4d9fc392017e"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.453748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.454709 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:09.954692583 +0000 UTC m=+154.903916000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.466905 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.486558 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hbnxp" podStartSLOduration=129.486529058 podStartE2EDuration="2m9.486529058s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.484684942 +0000 UTC m=+154.433908359" watchObservedRunningTime="2025-12-10 14:34:09.486529058 +0000 UTC m=+154.435752475" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.533474 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.546539 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.557943 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.560798 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.060764512 +0000 UTC m=+155.009987919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.567854 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.598369 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2x76l" podStartSLOduration=129.598348821 podStartE2EDuration="2m9.598348821s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.566025244 +0000 UTC m=+154.515248661" watchObservedRunningTime="2025-12-10 14:34:09.598348821 +0000 UTC m=+154.547572238" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.599682 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" event={"ID":"8b5042f5-52a1-42da-9a21-72d7b2a75c75","Type":"ContainerStarted","Data":"98b8d323b3c6c2926ee9f68b493ec3c6f8d76ec464c9188a1f25fc84008cff65"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.613415 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.629943 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gvvtl"] Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.651436 
4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" podStartSLOduration=128.651409057 podStartE2EDuration="2m8.651409057s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.651065028 +0000 UTC m=+154.600288465" watchObservedRunningTime="2025-12-10 14:34:09.651409057 +0000 UTC m=+154.600632464" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.652264 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" event={"ID":"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f","Type":"ContainerStarted","Data":"6351be66f68a0fc4997dc519045e23535a5dd3170ce86d00d10cdc6b92b4a447"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.661657 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.663617 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.163592891 +0000 UTC m=+155.112816308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.726239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" event={"ID":"f18678b9-691a-4582-b327-b5bc9f1983d8","Type":"ContainerStarted","Data":"ab1b71a5e093a8d75611381560e477c8814c42ab61b8f696da415a043e2a4919"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.754878 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" event={"ID":"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35","Type":"ContainerStarted","Data":"9bf169f9a895fdb0c568ba5c492c0b9dfa7f249d9e8f32bda96e0a0c3378752f"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.771000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.771426 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.271405975 +0000 UTC m=+155.220629392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.772209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" event={"ID":"dff4ff8a-f156-4da4-ba81-477078a8345d","Type":"ContainerStarted","Data":"53558c77b5e312c46511471e95d187645d78479c83cc9b2cf2de80b711e83533"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.779352 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" event={"ID":"cc343e66-a0f1-4f9b-bbba-77b38c0260e7","Type":"ContainerStarted","Data":"a6db651908bc7cc6d62ba4bcd8bafd62561ac58154d82bbcfd58f94f263c2367"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.790096 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" event={"ID":"42d025d0-f89c-4e18-aae4-f923d5797693","Type":"ContainerStarted","Data":"06c7fea07905790343b4f9245cc6d8dc5f1e342075152349f7774170d41ddf03"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.797862 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qq8sm" event={"ID":"4f9989fa-f608-438c-9bfd-f06ef32470c7","Type":"ContainerStarted","Data":"67937df5659d8cac317dd1d89bf5af2520859ef068643f648d3c90c2fb3e3c80"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.799282 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.800101 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" podStartSLOduration=130.800065141 podStartE2EDuration="2m10.800065141s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.796604804 +0000 UTC m=+154.745828221" watchObservedRunningTime="2025-12-10 14:34:09.800065141 +0000 UTC m=+154.749288578" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.837085 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" event={"ID":"2e7d85a9-ce97-4481-934c-3e6f3c720007","Type":"ContainerStarted","Data":"cbc4ff4ae97db955974be015b10aa003a5ea73851fa4b86646d14d920188eee6"} Dec 10 14:34:09 crc kubenswrapper[4718]: W1210 14:34:09.841751 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc13833f_8cfa_417a_a16a_420e5b00843b.slice/crio-d9d0dc9546421aa0d67759eaea0f6f08520d1137f9836ed2bc96b4430298e35a WatchSource:0}: Error finding container d9d0dc9546421aa0d67759eaea0f6f08520d1137f9836ed2bc96b4430298e35a: Status 404 returned error can't find the container with id d9d0dc9546421aa0d67759eaea0f6f08520d1137f9836ed2bc96b4430298e35a Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.855586 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-s92b8" event={"ID":"799e0d54-7a8e-48e5-b5d0-db944b3cbe25","Type":"ContainerStarted","Data":"ecb77001471c6f87fb8e62500cbcf0afcb4937db6aaabe9e4bda9215296f3c4e"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.871594 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" podStartSLOduration=130.871564317 podStartE2EDuration="2m10.871564317s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.84005782 +0000 UTC m=+154.789281237" watchObservedRunningTime="2025-12-10 14:34:09.871564317 +0000 UTC m=+154.820787734" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.872980 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-44d9w" podStartSLOduration=129.872973342 podStartE2EDuration="2m9.872973342s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.871774542 +0000 UTC m=+154.820997959" watchObservedRunningTime="2025-12-10 14:34:09.872973342 +0000 UTC m=+154.822196749" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.873012 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.875132 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.375103305 +0000 UTC m=+155.324326722 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.880685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" event={"ID":"db2db73c-d11d-46e6-9cc6-331faf9a21ca","Type":"ContainerStarted","Data":"435526bb7d72445441dee150a505b66a993b5f3b91cac8b1798b212c79dd346b"} Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.908038 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qq8sm" podStartSLOduration=129.907995357 podStartE2EDuration="2m9.907995357s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:09.898684764 +0000 UTC m=+154.847908181" watchObservedRunningTime="2025-12-10 14:34:09.907995357 +0000 UTC m=+154.857218764" Dec 10 14:34:09 crc kubenswrapper[4718]: I1210 14:34:09.975517 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:09 crc kubenswrapper[4718]: E1210 14:34:09.978507 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.478469697 +0000 UTC m=+155.427693164 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.078222 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.078673 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.57865625 +0000 UTC m=+155.527879667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.179632 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.180125 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.680103505 +0000 UTC m=+155.629326922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.281161 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.281763 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.781739384 +0000 UTC m=+155.730962811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.361309 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qq8sm" Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.385190 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.385583 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.885556397 +0000 UTC m=+155.834779814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.385760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.386148 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.886138352 +0000 UTC m=+155.835361769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.487061 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.487457 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:10.987433482 +0000 UTC m=+155.936656899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.589875 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.590920 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.090900557 +0000 UTC m=+156.040123974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.691309 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.691509 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.19147502 +0000 UTC m=+156.140698437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.691660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.692282 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.192264929 +0000 UTC m=+156.141488346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.793588 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.793850 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.293797376 +0000 UTC m=+156.243020793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.793969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.794581 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.294561475 +0000 UTC m=+156.243784892 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.895520 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:10 crc kubenswrapper[4718]: E1210 14:34:10.896455 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.39643197 +0000 UTC m=+156.345655387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.949815 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-7fd22" event={"ID":"40856145-6171-40e7-97b9-51268a8c348b","Type":"ContainerStarted","Data":"c1a1a28bd827ae94afb04d46490eadbb867236053140164bbc2867ba711376ce"}
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.966308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rnbkw" event={"ID":"42d025d0-f89c-4e18-aae4-f923d5797693","Type":"ContainerStarted","Data":"e5eb2f26aeca52da738469517212c0b8b7db5090a968794fdff94d0b22c56149"}
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.999432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" event={"ID":"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f","Type":"ContainerStarted","Data":"902f507aa777fa77a379f994267a47d5a10c6e6a0608b987c007b0926c1afc1a"}
Dec 10 14:34:10 crc kubenswrapper[4718]: I1210 14:34:10.999498 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" event={"ID":"e82e7008-55b0-4f3c-bdd7-8c9bfbc9284f","Type":"ContainerStarted","Data":"02312519f3710cbe6043d064ac79d2cf053dac393291d7fd202141ae19cdfdd9"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.002765 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-7fd22" podStartSLOduration=131.002750466 podStartE2EDuration="2m11.002750466s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.002022548 +0000 UTC m=+155.951245965" watchObservedRunningTime="2025-12-10 14:34:11.002750466 +0000 UTC m=+155.951973883"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.003529 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.003928 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.503903235 +0000 UTC m=+156.453126652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.012674 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" event={"ID":"079f1ed7-7f70-4b4c-9afd-cf0286348562","Type":"ContainerStarted","Data":"68031118c0cf04c584c88a197dace78ab0d8a9e577e350630834aff8ca9a6a65"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.070808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" event={"ID":"451fb12e-a97f-441e-8a8c-d4c217640aef","Type":"ContainerStarted","Data":"1d04a660357379885f49b3614a2508450398926ae43b100daeef376b23948e39"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.104792 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.106558 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.606506078 +0000 UTC m=+156.555729625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.127658 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" event={"ID":"f201fb44-80ca-4142-aaf9-4cfbd8215309","Type":"ContainerStarted","Data":"650bf399b6cabc261ee8ec2ccc1ea8adaa9336a4c36a078d6abca131486774a3"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.164332 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7p66s" podStartSLOduration=130.164299032 podStartE2EDuration="2m10.164299032s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.033252758 +0000 UTC m=+155.982476175" watchObservedRunningTime="2025-12-10 14:34:11.164299032 +0000 UTC m=+156.113522449"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.167193 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xtqqs" podStartSLOduration=131.167163263 podStartE2EDuration="2m11.167163263s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.163237425 +0000 UTC m=+156.112460842" watchObservedRunningTime="2025-12-10 14:34:11.167163263 +0000 UTC m=+156.116386680"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.203992 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" event={"ID":"cc343e66-a0f1-4f9b-bbba-77b38c0260e7","Type":"ContainerStarted","Data":"18358a2212f25b5018d14e53bcd01ddabaf0f0c4fc96014c3e5d1304560cec89"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.206893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.208610 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.708588118 +0000 UTC m=+156.657811535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.283372 4718 generic.go:334] "Generic (PLEG): container finished" podID="78119cb9-deb1-4a37-bc05-7b911969047f" containerID="c20ef00a838ebe236fe35a4b2d396f913df69637291ab3d8176323b356b43b52" exitCode=0
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.283484 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" event={"ID":"78119cb9-deb1-4a37-bc05-7b911969047f","Type":"ContainerDied","Data":"c20ef00a838ebe236fe35a4b2d396f913df69637291ab3d8176323b356b43b52"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.291137 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" event={"ID":"4fb46f0a-fd98-475f-a2fe-db2c2f8c3d35","Type":"ContainerStarted","Data":"795ff2c75c8ba55aae731139647f62d90557c0f19f459ad29812b4652b0092c5"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.304660 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-7fd22"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.310535 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.312629 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.812602516 +0000 UTC m=+156.761825943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.313134 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 10 14:34:11 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld
Dec 10 14:34:11 crc kubenswrapper[4718]: [+]process-running ok
Dec 10 14:34:11 crc kubenswrapper[4718]: healthz check failed
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.313171 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.351306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l47zr" event={"ID":"522ecb50-d5ad-4834-9a63-a4c843ca824d","Type":"ContainerStarted","Data":"d18e5f93aa4a4ba2836566fbb1aba0ba7820d17c1011fec7d3a1c9839dea1c27"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.351372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l47zr" event={"ID":"522ecb50-d5ad-4834-9a63-a4c843ca824d","Type":"ContainerStarted","Data":"4fb83d2d2f932e01dc0aeb9743f93c400a2d5e3873f632dcd07d87be2b54058d"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.374014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" event={"ID":"fe1f76c3-fb22-4c92-bc66-7048e04e63b0","Type":"ContainerStarted","Data":"508d525dc2f70e9e43205c094f54141aad09c4c02ee2fbfc1987ef14b145167e"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.403594 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f68836720097bf7e99b119d17c19dcc190602b9553caa08f373f19cc1c6f9f0d"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.403987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4958e753c156089f83a121334fe598f01a06f343376f0d9411b6ab000a00a808"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.405567 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.412345 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.413079 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:11.913062546 +0000 UTC m=+156.862285963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.430726 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dm4f" podStartSLOduration=130.430701757 podStartE2EDuration="2m10.430701757s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.393115518 +0000 UTC m=+156.342338935" watchObservedRunningTime="2025-12-10 14:34:11.430701757 +0000 UTC m=+156.379925184"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.432245 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-l47zr" podStartSLOduration=9.432235795 podStartE2EDuration="9.432235795s" podCreationTimestamp="2025-12-10 14:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.43161916 +0000 UTC m=+156.380842577" watchObservedRunningTime="2025-12-10 14:34:11.432235795 +0000 UTC m=+156.381459212"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.488238 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" event={"ID":"dff4ff8a-f156-4da4-ba81-477078a8345d","Type":"ContainerStarted","Data":"3d0778dc12dd262505f9f4eb58b634c40987acff8ff079b4c716cfaf2f0e0a47"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.513375 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" event={"ID":"b0aac105-3797-4d99-ad0f-443048b96b0a","Type":"ContainerStarted","Data":"3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.513474 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" event={"ID":"b0aac105-3797-4d99-ad0f-443048b96b0a","Type":"ContainerStarted","Data":"c7f213ddcf78d1aa56392e71cbb1b0e8edaafc8a9a983d6862df6b13ffdbde8c"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.513764 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.514040 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.515286 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.015244499 +0000 UTC m=+156.964467916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.530089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" event={"ID":"28dfccad-4a7e-463f-a8b3-77a451a865ae","Type":"ContainerStarted","Data":"28e67fe3fc460773bc350b0348866e575533bbd40e77cc4ee7fec23e6da6fc1c"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.561369 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" podStartSLOduration=130.5613429 podStartE2EDuration="2m10.5613429s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.56013659 +0000 UTC m=+156.509360007" watchObservedRunningTime="2025-12-10 14:34:11.5613429 +0000 UTC m=+156.510566317"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.597259 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-prs2h" event={"ID":"8b5042f5-52a1-42da-9a21-72d7b2a75c75","Type":"ContainerStarted","Data":"81d7f9d37ef0592ef0c20186c3390cd803602c45712456ec09ab6d0672c06bc6"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.607447 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" event={"ID":"3bc2976e-bdb0-4450-9c5e-73052e705f7a","Type":"ContainerStarted","Data":"a755f268c9cdf328040705c058fb82d37059fb014e3b78fd1f70ca1dc10e8a4f"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.609093 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.615819 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.618195 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.11817236 +0000 UTC m=+157.067395777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.623203 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" event={"ID":"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce","Type":"ContainerStarted","Data":"2227bc5ec1e3bfe4297f3dab448b8b277b24fa99266cbbc5669ac37b9dd8bd56"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.638186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" event={"ID":"850edb82-dd95-474f-a41c-3aa48faa4b87","Type":"ContainerStarted","Data":"fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.642755 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.643364 4718 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r4272 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body=
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.643407 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" podUID="3bc2976e-bdb0-4450-9c5e-73052e705f7a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.672667 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gvvtl" event={"ID":"bc13833f-8cfa-417a-a16a-420e5b00843b","Type":"ContainerStarted","Data":"d9d0dc9546421aa0d67759eaea0f6f08520d1137f9836ed2bc96b4430298e35a"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.710923 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.715417 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" podStartSLOduration=131.715364738 podStartE2EDuration="2m11.715364738s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.704027895 +0000 UTC m=+156.653251312" watchObservedRunningTime="2025-12-10 14:34:11.715364738 +0000 UTC m=+156.664588155"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.716678 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.717941 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.217925532 +0000 UTC m=+157.167148949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.715536 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" podStartSLOduration=130.715531072 podStartE2EDuration="2m10.715531072s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:11.635114203 +0000 UTC m=+156.584337640" watchObservedRunningTime="2025-12-10 14:34:11.715531072 +0000 UTC m=+156.664754489"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.734508 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" event={"ID":"db2db73c-d11d-46e6-9cc6-331faf9a21ca","Type":"ContainerStarted","Data":"e4d3c747f20d30dfda273edde4e6143ffe73b9213bb14c64d73dce5f18820d1b"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.763738 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0f01ec5e7489276a9ed06f9a4994297eca24a13b372ab7b1f2d77699d5100f46"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.815154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3a10b71bdf89b2527da85f7e4df1f20f9b089b7b4e81038ce0c2c18bd1692835"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.815208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ad0ce8cfb8c1106dec2ad97c1097e26d5f4724b94bdd40da0ce37fac4727e7e2"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.831316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.831870 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.331853838 +0000 UTC m=+157.281077255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.853910 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.905842 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" event={"ID":"a07b76f5-7f05-410e-a0e9-3aff034786ad","Type":"ContainerStarted","Data":"a4b4bca50b8497bce0945c7cd7c56430117dd23a0843020d0a316ed8ca2c54aa"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.940071 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:11 crc kubenswrapper[4718]: E1210 14:34:11.940595 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.440552484 +0000 UTC m=+157.389775901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.952408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" event={"ID":"061a3cd6-2958-4635-9ce1-4989d98d8432","Type":"ContainerStarted","Data":"419141d4c450a4cd625ab4c16be2d581fc4fc1b23e36e117e2ee723075bf97ba"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.952484 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" event={"ID":"061a3cd6-2958-4635-9ce1-4989d98d8432","Type":"ContainerStarted","Data":"7925e20547e18779bb0db592c11fbd55ce222c695299bb919b8d1b7a9c920fb2"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.952505 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4"
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.983500 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" event={"ID":"dc6efe70-637d-42bf-a57e-92ed4503a23a","Type":"ContainerStarted","Data":"2e831929469d5d3424be80a6e27f9e7cdef1823552a440e0152cdd576bad4833"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.996356 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" event={"ID":"9c06237f-9a58-430d-93c8-69297f9e1363","Type":"ContainerStarted","Data":"b2eac01da3b4b3a519b5b448174504ce8a764cad87e595e081b9179a2907e7f2"}
Dec 10 14:34:11 crc kubenswrapper[4718]: I1210 14:34:11.996437 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" event={"ID":"9c06237f-9a58-430d-93c8-69297f9e1363","Type":"ContainerStarted","Data":"dc8d1b9e170a5d6bd357e9731c89e923eab6454886a99cb5bfa7d03724e0a5a7"}
Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.064089 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbnxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.064221 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbnxp" podUID="b7432a0e-050f-4112-a75d-f8687233cf0f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.066871 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz"
Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.071111 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.571062894 +0000 UTC m=+157.520286311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.171146 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.172464 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.672432396 +0000 UTC m=+157.621655813 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.209158 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" event={"ID":"f18678b9-691a-4582-b327-b5bc9f1983d8","Type":"ContainerStarted","Data":"608558713d0c5ea811ccc15200b2fcb9d231d5b266918c0d606af68e6dd6983d"} Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.209846 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.209959 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-mx69k" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.251182 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rn9bk" podStartSLOduration=132.251156823 podStartE2EDuration="2m12.251156823s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:12.24263594 +0000 UTC m=+157.191859357" watchObservedRunningTime="2025-12-10 14:34:12.251156823 +0000 UTC m=+157.200380240" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.277015 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.277851 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.777833539 +0000 UTC m=+157.727056956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.321348 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:12 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:12 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:12 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.321421 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.334619 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-76q8c" podStartSLOduration=133.334597978 podStartE2EDuration="2m13.334597978s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:12.32107381 +0000 UTC m=+157.270297227" watchObservedRunningTime="2025-12-10 14:34:12.334597978 +0000 UTC m=+157.283821395" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.378825 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.379469 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.879443078 +0000 UTC m=+157.828666495 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.450414 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" podStartSLOduration=133.45037324 podStartE2EDuration="2m13.45037324s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:12.41755738 +0000 UTC m=+157.366780797" watchObservedRunningTime="2025-12-10 14:34:12.45037324 +0000 UTC m=+157.399596657" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.480919 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.481589 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:12.981572859 +0000 UTC m=+157.930796276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.489759 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" podStartSLOduration=132.489733723 podStartE2EDuration="2m12.489733723s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:12.450866832 +0000 UTC m=+157.400090249" watchObservedRunningTime="2025-12-10 14:34:12.489733723 +0000 UTC m=+157.438957140" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.552211 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" podStartSLOduration=131.552184973 podStartE2EDuration="2m11.552184973s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:12.493893317 +0000 UTC m=+157.443116724" watchObservedRunningTime="2025-12-10 14:34:12.552184973 +0000 UTC m=+157.501408390" Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.585760 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.586689 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.086668855 +0000 UTC m=+158.035892262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.689274 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.689675 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.189661368 +0000 UTC m=+158.138884785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.794381 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.794679 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.294631879 +0000 UTC m=+158.243855306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.796698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.797176 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.297158692 +0000 UTC m=+158.246382109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.908035 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.908318 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.408281348 +0000 UTC m=+158.357504765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:12 crc kubenswrapper[4718]: I1210 14:34:12.908783 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:12 crc kubenswrapper[4718]: E1210 14:34:12.909223 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.409204541 +0000 UTC m=+158.358427958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.009773 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.010036 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.509992499 +0000 UTC m=+158.459215916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.010154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.010701 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.510688836 +0000 UTC m=+158.459912253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.069407 4718 generic.go:334] "Generic (PLEG): container finished" podID="dff4ff8a-f156-4da4-ba81-477078a8345d" containerID="3d0778dc12dd262505f9f4eb58b634c40987acff8ff079b4c716cfaf2f0e0a47" exitCode=0 Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.069517 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" event={"ID":"dff4ff8a-f156-4da4-ba81-477078a8345d","Type":"ContainerDied","Data":"3d0778dc12dd262505f9f4eb58b634c40987acff8ff079b4c716cfaf2f0e0a47"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.088308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" event={"ID":"451fb12e-a97f-441e-8a8c-d4c217640aef","Type":"ContainerStarted","Data":"e845f171f5180489cbc7cd369993220b5a604c652034411ca26e46365ecb30b8"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.089133 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.091734 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kkv25" event={"ID":"9c06237f-9a58-430d-93c8-69297f9e1363","Type":"ContainerStarted","Data":"3489ff814f07ef90fc09729f0adb3d7ad2f5aab4e8711ca72d34aec3d6367522"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.097785 4718 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-twtlk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.097865 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.113150 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.114665 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.614606132 +0000 UTC m=+158.563829549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.124418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" event={"ID":"079f1ed7-7f70-4b4c-9afd-cf0286348562","Type":"ContainerStarted","Data":"fd4cd53564b52fc84d217e81e25b757337f306a31a8c94ddf3264a4c67c9e5ac"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.126158 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.163279 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" event={"ID":"db2db73c-d11d-46e6-9cc6-331faf9a21ca","Type":"ContainerStarted","Data":"f5448759c19154f963dc3f56859ca6efcb8886f816515cda0f7ab1920a3b8412"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.194832 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.195272 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" event={"ID":"28dfccad-4a7e-463f-a8b3-77a451a865ae","Type":"ContainerStarted","Data":"cd6eb3ba84353a6452fa162ec05c307640a06a19f16a478413d1be431e977d0d"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.195367 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" event={"ID":"28dfccad-4a7e-463f-a8b3-77a451a865ae","Type":"ContainerStarted","Data":"93d45675b8a6dd852c52e3207769c568e33cbecbfbc574d6c3bdb5cb6448f78c"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.205832 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" podStartSLOduration=132.205803801 podStartE2EDuration="2m12.205803801s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.168270363 +0000 UTC m=+158.117493780" watchObservedRunningTime="2025-12-10 14:34:13.205803801 +0000 UTC m=+158.155027218" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.216452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.219133 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.719071352 +0000 UTC m=+158.668294769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.238050 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" event={"ID":"78119cb9-deb1-4a37-bc05-7b911969047f","Type":"ContainerStarted","Data":"5744e796d374ae2da64c72befee6ae8b7b8eccc2c653e196f38d32410effed0c"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.251967 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" podStartSLOduration=132.251946384 podStartE2EDuration="2m12.251946384s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.204335104 +0000 UTC m=+158.153558541" watchObservedRunningTime="2025-12-10 14:34:13.251946384 +0000 UTC m=+158.201169801" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.270925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" event={"ID":"fe1f76c3-fb22-4c92-bc66-7048e04e63b0","Type":"ContainerStarted","Data":"1728b2c33e22f82d07bca904b25a96662a56656b98c5c6c3022e42c32b9ac1cc"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.296676 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" 
event={"ID":"3bc2976e-bdb0-4450-9c5e-73052e705f7a","Type":"ContainerStarted","Data":"06bfbda2fa79e472b6315537dbd854a5608dcd9a11550500cb6553316428cb41"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.317693 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:13 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:13 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:13 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.317781 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.318541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.319786 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.819765168 +0000 UTC m=+158.768988585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.351113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gvvtl" event={"ID":"bc13833f-8cfa-417a-a16a-420e5b00843b","Type":"ContainerStarted","Data":"1859a3a31505497ab8e4f550d391075dad9aa5f18831ac1263c3074091d1495f"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.351170 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gvvtl" event={"ID":"bc13833f-8cfa-417a-a16a-420e5b00843b","Type":"ContainerStarted","Data":"244237e57dd794d263f94b5c056e0d3b7ea34ff635f2622990c9159003647f80"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.352497 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.374801 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" event={"ID":"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce","Type":"ContainerStarted","Data":"ebdf9a1fbc382fdf7e40de4390cccb3bb212d39b808893017f4da7779ebfb954"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.374876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" event={"ID":"96a4fa95-2d84-4fc6-94a4-5629fe98e3ce","Type":"ContainerStarted","Data":"5b7010a683031dd1cfa1a67480db0a865930f3c5b93688597742cb2fb7361a7c"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.391034 
4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" podStartSLOduration=132.390999537 podStartE2EDuration="2m12.390999537s" podCreationTimestamp="2025-12-10 14:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.386282249 +0000 UTC m=+158.335505666" watchObservedRunningTime="2025-12-10 14:34:13.390999537 +0000 UTC m=+158.340222964" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.397435 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9zbxh" podStartSLOduration=133.397413788 podStartE2EDuration="2m13.397413788s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.338199358 +0000 UTC m=+158.287422775" watchObservedRunningTime="2025-12-10 14:34:13.397413788 +0000 UTC m=+158.346637205" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.410725 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"24090fae63e945b827770087cb454970cca215e236f134e4e38d7d9e8a49ffcf"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.423694 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.426172 4718 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:13.926149935 +0000 UTC m=+158.875373352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.502301 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" event={"ID":"cc343e66-a0f1-4f9b-bbba-77b38c0260e7","Type":"ContainerStarted","Data":"8dbf4d8d48bde2531cd52e5a0808c16f7fac5edf334d7f880e90038fc9951a4a"} Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.536588 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbnxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.536639 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbnxp" podUID="b7432a0e-050f-4112-a75d-f8687233cf0f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.537478 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.537747 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.037731223 +0000 UTC m=+158.986954640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.639640 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.645176 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.145150386 +0000 UTC m=+159.094373983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.744688 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bvd4w" podStartSLOduration=133.744644552 podStartE2EDuration="2m13.744644552s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.579766073 +0000 UTC m=+158.528989510" watchObservedRunningTime="2025-12-10 14:34:13.744644552 +0000 UTC m=+158.693867969" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.746504 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.746773 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.246753245 +0000 UTC m=+159.195976662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.746852 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.747554 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.762779 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.763026 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.821770 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-btqhn" podStartSLOduration=133.821746478 podStartE2EDuration="2m13.821746478s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.787985445 +0000 UTC m=+158.737208862" watchObservedRunningTime="2025-12-10 14:34:13.821746478 +0000 UTC m=+158.770969895" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.822240 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.851588 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-brrd5" podStartSLOduration=133.851561703 podStartE2EDuration="2m13.851561703s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.850826355 +0000 UTC m=+158.800049772" watchObservedRunningTime="2025-12-10 14:34:13.851561703 +0000 UTC m=+158.800785120" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.851768 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.852141 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.352128677 +0000 UTC m=+159.301352094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.852513 4718 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.952920 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gvvtl" podStartSLOduration=11.952897294 podStartE2EDuration="11.952897294s" podCreationTimestamp="2025-12-10 14:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:13.951057218 +0000 UTC m=+158.900280635" watchObservedRunningTime="2025-12-10 14:34:13.952897294 +0000 UTC m=+158.902120711" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.954436 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.954565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:13 crc kubenswrapper[4718]: I1210 14:34:13.954618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:13 crc kubenswrapper[4718]: E1210 14:34:13.954804 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.454781432 +0000 UTC m=+159.404004849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.056051 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.056113 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.056150 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.056519 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.556502623 +0000 UTC m=+159.505726040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.056635 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.112438 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.157671 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.157967 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:14.657921996 +0000 UTC m=+159.607145413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.158055 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.158576 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.658564012 +0000 UTC m=+159.607787599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.243296 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l2t5d"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.244350 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.252678 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.253427 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.259400 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.259587 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:14.759535675 +0000 UTC m=+159.708759092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.259698 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75td8\" (UniqueName: \"kubernetes.io/projected/a267a67e-effa-40d4-8923-7669798e594d-kube-api-access-75td8\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.259750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-utilities\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.259852 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-catalog-content\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.259977 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.260481 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.760465758 +0000 UTC m=+159.709689185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.305764 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:14 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:14 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:14 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.305853 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.340920 4718 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2t5d"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.360902 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.361177 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.861136943 +0000 UTC m=+159.810360370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.361246 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-catalog-content\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.361449 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.361607 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75td8\" (UniqueName: \"kubernetes.io/projected/a267a67e-effa-40d4-8923-7669798e594d-kube-api-access-75td8\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.361651 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-utilities\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.362078 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-catalog-content\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.362164 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-utilities\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.362557 4718 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-10 14:34:14.862539868 +0000 UTC m=+159.811763285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.378640 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.415578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75td8\" (UniqueName: \"kubernetes.io/projected/a267a67e-effa-40d4-8923-7669798e594d-kube-api-access-75td8\") pod \"community-operators-l2t5d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.440280 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b4bq2"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.441255 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.444837 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.461291 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4bq2"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.462684 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.462805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-catalog-content\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.462867 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4zhd\" (UniqueName: \"kubernetes.io/projected/37550300-88f8-40dc-be98-1825518dc65c-kube-api-access-k4zhd\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.462971 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:14.962917436 +0000 UTC m=+159.912140853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.463091 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-utilities\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.476198 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.476577 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.517785 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.517851 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.528981 4718 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5qn5w container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]log ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]etcd ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/generic-apiserver-start-informers ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/max-in-flight-filter ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 10 14:34:14 crc kubenswrapper[4718]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 10 14:34:14 crc kubenswrapper[4718]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/project.openshift.io-projectcache ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/openshift.io-startinformers ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 10 14:34:14 crc kubenswrapper[4718]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 10 14:34:14 crc kubenswrapper[4718]: livez check failed Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.529072 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" podUID="f18678b9-691a-4582-b327-b5bc9f1983d8" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.532810 4718 patch_prober.go:28] interesting pod/console-f9d7485db-t2gmf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 
10.217.0.15:8443: connect: connection refused" start-of-body= Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.532886 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t2gmf" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.561736 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.567258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" event={"ID":"db2db73c-d11d-46e6-9cc6-331faf9a21ca","Type":"ContainerStarted","Data":"13079b7a9ecbe0a46e437330961cb33e92357ab64f49e68a8c93a26145c8aa81"} Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.573809 4718 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-twtlk container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.573883 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.573963 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.574064 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4zhd\" (UniqueName: \"kubernetes.io/projected/37550300-88f8-40dc-be98-1825518dc65c-kube-api-access-k4zhd\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.574226 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-utilities\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.574795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-catalog-content\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.576319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-utilities\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: E1210 14:34:14.580260 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-10 14:34:15.080224576 +0000 UTC m=+160.029447993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n2lcz" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.585999 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-catalog-content\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.592982 4718 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-10T14:34:13.852543137Z","Handler":null,"Name":""} Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.605994 4718 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.606091 4718 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.637839 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4zhd\" (UniqueName: 
\"kubernetes.io/projected/37550300-88f8-40dc-be98-1825518dc65c-kube-api-access-k4zhd\") pod \"certified-operators-b4bq2\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") " pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.655985 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q2xbb"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.657197 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.679528 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.679968 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98h8v\" (UniqueName: \"kubernetes.io/projected/046f580c-2cdc-471d-a16a-43913d0b6092-kube-api-access-98h8v\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.680029 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-catalog-content\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.680399 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-utilities\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.687777 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q2xbb"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.717227 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.786848 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.787095 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-utilities\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.787170 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.787190 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98h8v\" (UniqueName: \"kubernetes.io/projected/046f580c-2cdc-471d-a16a-43913d0b6092-kube-api-access-98h8v\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.787208 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-catalog-content\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.787720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-catalog-content\") pod 
\"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.788232 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-utilities\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.812877 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98h8v\" (UniqueName: \"kubernetes.io/projected/046f580c-2cdc-471d-a16a-43913d0b6092-kube-api-access-98h8v\") pod \"community-operators-q2xbb\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.842318 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bb57t"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.843720 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.874866 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bb57t"] Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.892005 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfw5\" (UniqueName: \"kubernetes.io/projected/37fd67ff-cb68-413b-a919-929db5e9d0b1-kube-api-access-9wfw5\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.892094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-utilities\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.892554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-catalog-content\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.996845 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-catalog-content\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.996926 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9wfw5\" (UniqueName: \"kubernetes.io/projected/37fd67ff-cb68-413b-a919-929db5e9d0b1-kube-api-access-9wfw5\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.997007 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-utilities\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.997526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-utilities\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:14 crc kubenswrapper[4718]: I1210 14:34:14.997750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-catalog-content\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.027319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfw5\" (UniqueName: \"kubernetes.io/projected/37fd67ff-cb68-413b-a919-929db5e9d0b1-kube-api-access-9wfw5\") pod \"certified-operators-bb57t\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.044638 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.044722 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.045854 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.132474 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.138458 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n2lcz\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.200766 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff4ff8a-f156-4da4-ba81-477078a8345d-config-volume\") pod \"dff4ff8a-f156-4da4-ba81-477078a8345d\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.201014 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dff4ff8a-f156-4da4-ba81-477078a8345d-secret-volume\") pod \"dff4ff8a-f156-4da4-ba81-477078a8345d\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.201100 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq9rj\" (UniqueName: \"kubernetes.io/projected/dff4ff8a-f156-4da4-ba81-477078a8345d-kube-api-access-nq9rj\") pod \"dff4ff8a-f156-4da4-ba81-477078a8345d\" (UID: \"dff4ff8a-f156-4da4-ba81-477078a8345d\") " Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.202213 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff4ff8a-f156-4da4-ba81-477078a8345d-config-volume" (OuterVolumeSpecName: "config-volume") pod "dff4ff8a-f156-4da4-ba81-477078a8345d" (UID: "dff4ff8a-f156-4da4-ba81-477078a8345d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.203924 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.220751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff4ff8a-f156-4da4-ba81-477078a8345d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dff4ff8a-f156-4da4-ba81-477078a8345d" (UID: "dff4ff8a-f156-4da4-ba81-477078a8345d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.220902 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff4ff8a-f156-4da4-ba81-477078a8345d-kube-api-access-nq9rj" (OuterVolumeSpecName: "kube-api-access-nq9rj") pod "dff4ff8a-f156-4da4-ba81-477078a8345d" (UID: "dff4ff8a-f156-4da4-ba81-477078a8345d"). InnerVolumeSpecName "kube-api-access-nq9rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.257257 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.261852 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.299096 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l2t5d"] Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.302615 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dff4ff8a-f156-4da4-ba81-477078a8345d-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.302657 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq9rj\" (UniqueName: \"kubernetes.io/projected/dff4ff8a-f156-4da4-ba81-477078a8345d-kube-api-access-nq9rj\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.302671 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dff4ff8a-f156-4da4-ba81-477078a8345d-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.306753 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:15 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:15 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:15 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.306841 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.491161 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4bq2"] Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.589505 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d" event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerStarted","Data":"d797228584c0ac788adfa4038d054498bf79033359cb3ecceafb24936ed67fae"} Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.591524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8244dd75-1817-4fec-9ee4-3fb68956fbc1","Type":"ContainerStarted","Data":"f1d382d012a539c6e39f3c84526d98825669c382af66e55eac2be150a4eee84c"} Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.601644 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.604452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6" event={"ID":"dff4ff8a-f156-4da4-ba81-477078a8345d","Type":"ContainerDied","Data":"53558c77b5e312c46511471e95d187645d78479c83cc9b2cf2de80b711e83533"} Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.604580 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53558c77b5e312c46511471e95d187645d78479c83cc9b2cf2de80b711e83533" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.611342 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4bq2" event={"ID":"37550300-88f8-40dc-be98-1825518dc65c","Type":"ContainerStarted","Data":"b988ed9c5a009640a28c4b3c6e96048b12118f8dc7d5b9aa4574f486b53afa86"} Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.615932 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" event={"ID":"db2db73c-d11d-46e6-9cc6-331faf9a21ca","Type":"ContainerStarted","Data":"98fac1cfde54afd23aca67d36a1a87125f8a9fc15114258037624806f84dbc1b"} Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.663011 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xrpzk" podStartSLOduration=13.662970295000001 podStartE2EDuration="13.662970295s" podCreationTimestamp="2025-12-10 14:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:15.64914111 +0000 UTC m=+160.598364527" watchObservedRunningTime="2025-12-10 14:34:15.662970295 +0000 UTC m=+160.612193712" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.713784 4718 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-q2xbb"] Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.851476 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.851527 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.889925 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbnxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.889985 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hbnxp" podUID="b7432a0e-050f-4112-a75d-f8687233cf0f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.890360 4718 patch_prober.go:28] interesting pod/downloads-7954f5f757-hbnxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.890377 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hbnxp" podUID="b7432a0e-050f-4112-a75d-f8687233cf0f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Dec 10 14:34:15 crc kubenswrapper[4718]: I1210 14:34:15.923766 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.039527 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.040189 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n2lcz"] Dec 10 14:34:16 crc kubenswrapper[4718]: W1210 14:34:16.081314 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2638a0da_6209_4691_a4d4_6aa91a4ca547.slice/crio-89d8172ee68c1ce0612b0f9d92543a6555331767060a5644ea28b6db05f03443 WatchSource:0}: Error finding container 89d8172ee68c1ce0612b0f9d92543a6555331767060a5644ea28b6db05f03443: Status 404 returned error can't find the container with id 89d8172ee68c1ce0612b0f9d92543a6555331767060a5644ea28b6db05f03443 Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.139648 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bb57t"] Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.229310 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6bztl"] Dec 10 14:34:16 crc kubenswrapper[4718]: E1210 14:34:16.229692 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff4ff8a-f156-4da4-ba81-477078a8345d" containerName="collect-profiles" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.229726 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff4ff8a-f156-4da4-ba81-477078a8345d" containerName="collect-profiles" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.229848 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff4ff8a-f156-4da4-ba81-477078a8345d" containerName="collect-profiles" Dec 10 14:34:16 crc 
kubenswrapper[4718]: I1210 14:34:16.230875 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.234979 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.254780 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bztl"] Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.301797 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.313184 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:16 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:16 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:16 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.313288 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.349666 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpzrg\" (UniqueName: \"kubernetes.io/projected/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-kube-api-access-wpzrg\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc 
kubenswrapper[4718]: I1210 14:34:16.349742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-utilities\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.349830 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-catalog-content\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.450682 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-catalog-content\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.450769 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpzrg\" (UniqueName: \"kubernetes.io/projected/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-kube-api-access-wpzrg\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.450842 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-utilities\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 
crc kubenswrapper[4718]: I1210 14:34:16.451254 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-catalog-content\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.451568 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-utilities\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.454556 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.472773 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpzrg\" (UniqueName: \"kubernetes.io/projected/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-kube-api-access-wpzrg\") pod \"redhat-marketplace-6bztl\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") " pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.597711 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.626878 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4bq2" event={"ID":"37550300-88f8-40dc-be98-1825518dc65c","Type":"ContainerDied","Data":"d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.626951 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d8blb"] Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.627145 4718 generic.go:334] "Generic (PLEG): container finished" podID="37550300-88f8-40dc-be98-1825518dc65c" containerID="d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421" exitCode=0 Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.628074 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.629905 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.630917 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" event={"ID":"2638a0da-6209-4691-a4d4-6aa91a4ca547","Type":"ContainerStarted","Data":"39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.630958 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" event={"ID":"2638a0da-6209-4691-a4d4-6aa91a4ca547","Type":"ContainerStarted","Data":"89d8172ee68c1ce0612b0f9d92543a6555331767060a5644ea28b6db05f03443"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.631080 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.633133 4718 generic.go:334] "Generic (PLEG): container finished" podID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerID="a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d" exitCode=0 Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.633188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb57t" event={"ID":"37fd67ff-cb68-413b-a919-929db5e9d0b1","Type":"ContainerDied","Data":"a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.633208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb57t" event={"ID":"37fd67ff-cb68-413b-a919-929db5e9d0b1","Type":"ContainerStarted","Data":"305e295875bdd95d1f487ff81e351ce09bb6327ea7a9823c8e45c607a3cc9cf0"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.635564 4718 generic.go:334] "Generic (PLEG): container finished" podID="046f580c-2cdc-471d-a16a-43913d0b6092" containerID="ee70afc5d3452a59e2923013dd9b6d62adf8ab282eb0db08e332b37e6299e027" exitCode=0 Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.635661 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2xbb" event={"ID":"046f580c-2cdc-471d-a16a-43913d0b6092","Type":"ContainerDied","Data":"ee70afc5d3452a59e2923013dd9b6d62adf8ab282eb0db08e332b37e6299e027"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.635709 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2xbb" event={"ID":"046f580c-2cdc-471d-a16a-43913d0b6092","Type":"ContainerStarted","Data":"c941e6a7f2e0698089d953a22c3789ea24d5e3f6bde7a3d97cf8ec1579026bbe"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.638144 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="a267a67e-effa-40d4-8923-7669798e594d" containerID="d481d1f81e106cf2e907fafebba31e29f613de20322d827fd2143b1c998aa97d" exitCode=0 Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.638186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d" event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerDied","Data":"d481d1f81e106cf2e907fafebba31e29f613de20322d827fd2143b1c998aa97d"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.649887 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8blb"] Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.672270 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8244dd75-1817-4fec-9ee4-3fb68956fbc1","Type":"ContainerStarted","Data":"2293ec8b4966b5f6dd4bed0cc596345beaaa398a47df3d0c9bebbcb45e5e5c70"} Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.681627 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-lh6qt" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.697088 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" podStartSLOduration=136.697064558 podStartE2EDuration="2m16.697064558s" podCreationTimestamp="2025-12-10 14:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:16.69315553 +0000 UTC m=+161.642378967" watchObservedRunningTime="2025-12-10 14:34:16.697064558 +0000 UTC m=+161.646287975" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.756248 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-catalog-content\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.756466 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fb6p\" (UniqueName: \"kubernetes.io/projected/f73d3e76-59b1-446b-b599-18b0c115447f-kube-api-access-4fb6p\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.756601 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-utilities\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.795924 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.795901717 podStartE2EDuration="3.795901717s" podCreationTimestamp="2025-12-10 14:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:34:16.793974679 +0000 UTC m=+161.743198096" watchObservedRunningTime="2025-12-10 14:34:16.795901717 +0000 UTC m=+161.745125134" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.858842 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-utilities\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " 
pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.858954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-catalog-content\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.859051 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fb6p\" (UniqueName: \"kubernetes.io/projected/f73d3e76-59b1-446b-b599-18b0c115447f-kube-api-access-4fb6p\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.861642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-catalog-content\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.862325 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-utilities\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.911963 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bztl"] Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.925078 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fb6p\" (UniqueName: 
\"kubernetes.io/projected/f73d3e76-59b1-446b-b599-18b0c115447f-kube-api-access-4fb6p\") pod \"redhat-marketplace-d8blb\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:16 crc kubenswrapper[4718]: I1210 14:34:16.999136 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.306088 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:17 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:17 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:17 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.306872 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.327655 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8blb"] Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.425600 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xgvp"] Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.427004 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.431274 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.436020 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xgvp"] Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.571720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-utilities\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.572010 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-catalog-content\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.572038 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qwj\" (UniqueName: \"kubernetes.io/projected/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-kube-api-access-c5qwj\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.673531 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-catalog-content\") pod \"redhat-operators-6xgvp\" (UID: 
\"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.673592 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5qwj\" (UniqueName: \"kubernetes.io/projected/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-kube-api-access-c5qwj\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.673625 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-utilities\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.674337 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-catalog-content\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.674716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-utilities\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.676025 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8blb" event={"ID":"f73d3e76-59b1-446b-b599-18b0c115447f","Type":"ContainerStarted","Data":"50331affcb8059e7dfd2ad6f1199619153a8b67c9dabae953adc200b91b09f54"} Dec 10 14:34:17 crc kubenswrapper[4718]: 
I1210 14:34:17.678452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bztl" event={"ID":"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440","Type":"ContainerStarted","Data":"7d39c38ce1be6613f355711c5d12fb2dd06f2916379110362b99e06000f90aae"} Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.699983 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5qwj\" (UniqueName: \"kubernetes.io/projected/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-kube-api-access-c5qwj\") pod \"redhat-operators-6xgvp\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") " pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.750626 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.831538 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rwvfp"] Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.840161 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.853453 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwvfp"] Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.978960 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-utilities\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.979043 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-catalog-content\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:17 crc kubenswrapper[4718]: I1210 14:34:17.979092 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs954\" (UniqueName: \"kubernetes.io/projected/bc324349-4229-4bd4-b5b1-35de752d9f85-kube-api-access-fs954\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.009533 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xgvp"] Dec 10 14:34:18 crc kubenswrapper[4718]: W1210 14:34:18.017200 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413a70a8_23c0_4e34_a2f7_ec5cb980bfb5.slice/crio-b1ccd011ad5cbc576752302792179e8d4eeee09dca89768f2ceef6fae192c5a5 WatchSource:0}: Error finding container 
b1ccd011ad5cbc576752302792179e8d4eeee09dca89768f2ceef6fae192c5a5: Status 404 returned error can't find the container with id b1ccd011ad5cbc576752302792179e8d4eeee09dca89768f2ceef6fae192c5a5 Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.080800 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-utilities\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.080908 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-catalog-content\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.080967 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs954\" (UniqueName: \"kubernetes.io/projected/bc324349-4229-4bd4-b5b1-35de752d9f85-kube-api-access-fs954\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.082327 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-catalog-content\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.082377 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-utilities\") pod 
\"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.084085 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.084131 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.102986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs954\" (UniqueName: \"kubernetes.io/projected/bc324349-4229-4bd4-b5b1-35de752d9f85-kube-api-access-fs954\") pod \"redhat-operators-rwvfp\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.278780 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.307457 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:18 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:18 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:18 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.307583 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.558551 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.560025 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.563583 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.563670 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.564074 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.589694 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwvfp"] Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.691501 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.691663 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.701024 4718 generic.go:334] "Generic (PLEG): container finished" podID="f73d3e76-59b1-446b-b599-18b0c115447f" containerID="350ea21326b7dc3e696f2746bfa06f4be08012e6c6fb25f111eed083de7035b2" exitCode=0 Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.701433 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-d8blb" event={"ID":"f73d3e76-59b1-446b-b599-18b0c115447f","Type":"ContainerDied","Data":"350ea21326b7dc3e696f2746bfa06f4be08012e6c6fb25f111eed083de7035b2"} Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.709242 4718 generic.go:334] "Generic (PLEG): container finished" podID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerID="4732a81b71d59cb55fd7132f89a1435c3626287ae2bce394831a6e93b3667c25" exitCode=0 Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.709329 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bztl" event={"ID":"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440","Type":"ContainerDied","Data":"4732a81b71d59cb55fd7132f89a1435c3626287ae2bce394831a6e93b3667c25"} Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.715091 4718 generic.go:334] "Generic (PLEG): container finished" podID="8244dd75-1817-4fec-9ee4-3fb68956fbc1" containerID="2293ec8b4966b5f6dd4bed0cc596345beaaa398a47df3d0c9bebbcb45e5e5c70" exitCode=0 Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.715182 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8244dd75-1817-4fec-9ee4-3fb68956fbc1","Type":"ContainerDied","Data":"2293ec8b4966b5f6dd4bed0cc596345beaaa398a47df3d0c9bebbcb45e5e5c70"} Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.722274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerStarted","Data":"69bbfded079dece5ede0ba96ccdbfd2db7f4e9067394aa6654a485e064bc0bef"} Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.729938 4718 generic.go:334] "Generic (PLEG): container finished" podID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerID="9887a8dcb7daee930078e09f41ad3b35998b45163dc1aecd013ebfe52605600a" exitCode=0 Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 
14:34:18.730000 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xgvp" event={"ID":"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5","Type":"ContainerDied","Data":"9887a8dcb7daee930078e09f41ad3b35998b45163dc1aecd013ebfe52605600a"} Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.730034 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xgvp" event={"ID":"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5","Type":"ContainerStarted","Data":"b1ccd011ad5cbc576752302792179e8d4eeee09dca89768f2ceef6fae192c5a5"} Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.795434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.795572 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.795990 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.819124 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:18 crc kubenswrapper[4718]: I1210 14:34:18.885817 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.308116 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:19 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:19 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:19 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.308588 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.476161 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.486289 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.492157 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5qn5w" Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.739110 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"ed53b368-a8d3-4aab-bfcf-48e823b3974d","Type":"ContainerStarted","Data":"87ac056c361c7f90b8b0aedd88ee640243458263353667db43336963d4f5c926"} Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.741817 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerID="558531f00547567ce6eaf7db773fdc97bf415af2184066524a36edac6bcbdfec" exitCode=0 Dec 10 14:34:19 crc kubenswrapper[4718]: I1210 14:34:19.742919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerDied","Data":"558531f00547567ce6eaf7db773fdc97bf415af2184066524a36edac6bcbdfec"} Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.288712 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.317162 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:20 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:20 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:20 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.317251 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.431378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kubelet-dir\") pod \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.431580 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kube-api-access\") pod \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\" (UID: \"8244dd75-1817-4fec-9ee4-3fb68956fbc1\") " Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.433049 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8244dd75-1817-4fec-9ee4-3fb68956fbc1" (UID: "8244dd75-1817-4fec-9ee4-3fb68956fbc1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.447307 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8244dd75-1817-4fec-9ee4-3fb68956fbc1" (UID: "8244dd75-1817-4fec-9ee4-3fb68956fbc1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.534772 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.534815 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8244dd75-1817-4fec-9ee4-3fb68956fbc1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.767429 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.767458 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8244dd75-1817-4fec-9ee4-3fb68956fbc1","Type":"ContainerDied","Data":"f1d382d012a539c6e39f3c84526d98825669c382af66e55eac2be150a4eee84c"} Dec 10 14:34:20 crc kubenswrapper[4718]: I1210 14:34:20.767513 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d382d012a539c6e39f3c84526d98825669c382af66e55eac2be150a4eee84c" Dec 10 14:34:21 crc kubenswrapper[4718]: I1210 14:34:21.305507 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:21 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:21 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:21 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:21 crc kubenswrapper[4718]: I1210 14:34:21.305581 4718 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:21 crc kubenswrapper[4718]: I1210 14:34:21.393005 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gvvtl" Dec 10 14:34:21 crc kubenswrapper[4718]: I1210 14:34:21.792902 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed53b368-a8d3-4aab-bfcf-48e823b3974d","Type":"ContainerStarted","Data":"821a24a799a2da0145dd9225bc4cc8832cae644719dbe7bc7cdf06a39f2df399"} Dec 10 14:34:22 crc kubenswrapper[4718]: I1210 14:34:22.304200 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:22 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:22 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:22 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:22 crc kubenswrapper[4718]: I1210 14:34:22.304317 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:22 crc kubenswrapper[4718]: I1210 14:34:22.817934 4718 generic.go:334] "Generic (PLEG): container finished" podID="ed53b368-a8d3-4aab-bfcf-48e823b3974d" containerID="821a24a799a2da0145dd9225bc4cc8832cae644719dbe7bc7cdf06a39f2df399" exitCode=0 Dec 10 14:34:22 crc kubenswrapper[4718]: I1210 14:34:22.818111 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"ed53b368-a8d3-4aab-bfcf-48e823b3974d","Type":"ContainerDied","Data":"821a24a799a2da0145dd9225bc4cc8832cae644719dbe7bc7cdf06a39f2df399"} Dec 10 14:34:23 crc kubenswrapper[4718]: I1210 14:34:23.305304 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:23 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:23 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:23 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:23 crc kubenswrapper[4718]: I1210 14:34:23.305402 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.221210 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.303986 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kube-api-access\") pod \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.304304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kubelet-dir\") pod \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\" (UID: \"ed53b368-a8d3-4aab-bfcf-48e823b3974d\") " Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.304404 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed53b368-a8d3-4aab-bfcf-48e823b3974d" (UID: "ed53b368-a8d3-4aab-bfcf-48e823b3974d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.305021 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.313270 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:24 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:24 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:24 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.313465 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.334781 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed53b368-a8d3-4aab-bfcf-48e823b3974d" (UID: "ed53b368-a8d3-4aab-bfcf-48e823b3974d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.406603 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed53b368-a8d3-4aab-bfcf-48e823b3974d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.517855 4718 patch_prober.go:28] interesting pod/console-f9d7485db-t2gmf container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.517920 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t2gmf" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.835895 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"ed53b368-a8d3-4aab-bfcf-48e823b3974d","Type":"ContainerDied","Data":"87ac056c361c7f90b8b0aedd88ee640243458263353667db43336963d4f5c926"} Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.835971 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87ac056c361c7f90b8b0aedd88ee640243458263353667db43336963d4f5c926" Dec 10 14:34:24 crc kubenswrapper[4718]: I1210 14:34:24.835971 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 10 14:34:25 crc kubenswrapper[4718]: I1210 14:34:25.305607 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:25 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:25 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:25 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:25 crc kubenswrapper[4718]: I1210 14:34:25.305757 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:25 crc kubenswrapper[4718]: I1210 14:34:25.629484 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:34:25 crc kubenswrapper[4718]: I1210 14:34:25.644798 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1494ebfa-d66c-4200-a336-2cedebcd5889-metrics-certs\") pod \"network-metrics-daemon-r8zbt\" (UID: \"1494ebfa-d66c-4200-a336-2cedebcd5889\") " pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:34:25 crc kubenswrapper[4718]: I1210 14:34:25.913178 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hbnxp" Dec 10 14:34:25 crc kubenswrapper[4718]: I1210 14:34:25.915791 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-r8zbt" Dec 10 14:34:26 crc kubenswrapper[4718]: I1210 14:34:26.305316 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:26 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:26 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:26 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:26 crc kubenswrapper[4718]: I1210 14:34:26.305438 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:26 crc kubenswrapper[4718]: I1210 14:34:26.615434 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-r8zbt"] Dec 10 14:34:26 crc kubenswrapper[4718]: W1210 14:34:26.641043 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1494ebfa_d66c_4200_a336_2cedebcd5889.slice/crio-f0ad0a4bbce66afbec65e35954a53ff272ca3d65091c088d6bc666c68fe35959 WatchSource:0}: Error finding container f0ad0a4bbce66afbec65e35954a53ff272ca3d65091c088d6bc666c68fe35959: Status 404 returned error can't find the container with id f0ad0a4bbce66afbec65e35954a53ff272ca3d65091c088d6bc666c68fe35959 Dec 10 14:34:26 crc kubenswrapper[4718]: I1210 14:34:26.859277 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" event={"ID":"1494ebfa-d66c-4200-a336-2cedebcd5889","Type":"ContainerStarted","Data":"f0ad0a4bbce66afbec65e35954a53ff272ca3d65091c088d6bc666c68fe35959"} Dec 10 14:34:27 crc kubenswrapper[4718]: I1210 
14:34:27.308478 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:27 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:27 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:27 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:27 crc kubenswrapper[4718]: I1210 14:34:27.308555 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:27 crc kubenswrapper[4718]: I1210 14:34:27.871473 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" event={"ID":"1494ebfa-d66c-4200-a336-2cedebcd5889","Type":"ContainerStarted","Data":"8867280960d5b363d044257cf140ca8667eb16acbbc052022e0a706b01683690"} Dec 10 14:34:28 crc kubenswrapper[4718]: I1210 14:34:28.310158 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:28 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:28 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:28 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:28 crc kubenswrapper[4718]: I1210 14:34:28.310298 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:29 crc kubenswrapper[4718]: I1210 14:34:29.304934 
4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:29 crc kubenswrapper[4718]: [-]has-synced failed: reason withheld Dec 10 14:34:29 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:29 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:29 crc kubenswrapper[4718]: I1210 14:34:29.305075 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:30 crc kubenswrapper[4718]: I1210 14:34:30.305994 4718 patch_prober.go:28] interesting pod/router-default-5444994796-7fd22 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 10 14:34:30 crc kubenswrapper[4718]: [+]has-synced ok Dec 10 14:34:30 crc kubenswrapper[4718]: [+]process-running ok Dec 10 14:34:30 crc kubenswrapper[4718]: healthz check failed Dec 10 14:34:30 crc kubenswrapper[4718]: I1210 14:34:30.306375 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-7fd22" podUID="40856145-6171-40e7-97b9-51268a8c348b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:34:31 crc kubenswrapper[4718]: I1210 14:34:31.305620 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:31 crc kubenswrapper[4718]: I1210 14:34:31.308678 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-7fd22" Dec 10 14:34:35 crc 
kubenswrapper[4718]: I1210 14:34:35.264048 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:34:35 crc kubenswrapper[4718]: I1210 14:34:35.354031 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:35 crc kubenswrapper[4718]: I1210 14:34:35.360515 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:34:45 crc kubenswrapper[4718]: I1210 14:34:45.929939 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dskl4" Dec 10 14:34:46 crc kubenswrapper[4718]: I1210 14:34:46.250910 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 10 14:34:48 crc kubenswrapper[4718]: I1210 14:34:48.083939 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:34:48 crc kubenswrapper[4718]: I1210 14:34:48.085479 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.155236 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 14:34:51 crc kubenswrapper[4718]: E1210 14:34:51.155751 4718 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8244dd75-1817-4fec-9ee4-3fb68956fbc1" containerName="pruner" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.155767 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8244dd75-1817-4fec-9ee4-3fb68956fbc1" containerName="pruner" Dec 10 14:34:51 crc kubenswrapper[4718]: E1210 14:34:51.155780 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed53b368-a8d3-4aab-bfcf-48e823b3974d" containerName="pruner" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.155786 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed53b368-a8d3-4aab-bfcf-48e823b3974d" containerName="pruner" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.155905 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed53b368-a8d3-4aab-bfcf-48e823b3974d" containerName="pruner" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.155925 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8244dd75-1817-4fec-9ee4-3fb68956fbc1" containerName="pruner" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.156747 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.158923 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.161359 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.163849 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.207355 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.207556 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.309770 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.309916 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.310014 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.350704 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:51 crc kubenswrapper[4718]: I1210 14:34:51.486463 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.743595 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.748952 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.758593 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.886377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-var-lock\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.887271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09fe8e9b-3f80-4979-8203-7aca9407605d-kube-api-access\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.887568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.989826 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09fe8e9b-3f80-4979-8203-7aca9407605d-kube-api-access\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.989981 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.990034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-var-lock\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.990182 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-var-lock\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:55 crc kubenswrapper[4718]: I1210 14:34:55.991079 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:56 crc kubenswrapper[4718]: I1210 14:34:56.009420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09fe8e9b-3f80-4979-8203-7aca9407605d-kube-api-access\") pod \"installer-9-crc\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:34:56 crc kubenswrapper[4718]: I1210 14:34:56.089897 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:35:09 crc kubenswrapper[4718]: E1210 14:35:09.069103 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 14:35:09 crc kubenswrapper[4718]: E1210 14:35:09.071182 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wfw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bb57t_openshift-marketplace(37fd67ff-cb68-413b-a919-929db5e9d0b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:09 crc kubenswrapper[4718]: E1210 14:35:09.072454 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bb57t" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" Dec 10 14:35:11 crc kubenswrapper[4718]: E1210 14:35:11.495625 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bb57t" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" Dec 10 14:35:14 crc kubenswrapper[4718]: E1210 14:35:14.120989 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 10 14:35:14 crc kubenswrapper[4718]: E1210 14:35:14.121721 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75td8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l2t5d_openshift-marketplace(a267a67e-effa-40d4-8923-7669798e594d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:14 crc kubenswrapper[4718]: E1210 14:35:14.122929 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l2t5d" podUID="a267a67e-effa-40d4-8923-7669798e594d" Dec 10 14:35:18 crc 
kubenswrapper[4718]: I1210 14:35:18.084552 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:35:18 crc kubenswrapper[4718]: I1210 14:35:18.084987 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:35:18 crc kubenswrapper[4718]: I1210 14:35:18.085112 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:35:18 crc kubenswrapper[4718]: I1210 14:35:18.086090 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:35:18 crc kubenswrapper[4718]: I1210 14:35:18.086300 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a" gracePeriod=600 Dec 10 14:35:26 crc kubenswrapper[4718]: I1210 14:35:26.267950 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" 
containerID="f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a" exitCode=0 Dec 10 14:35:26 crc kubenswrapper[4718]: I1210 14:35:26.268042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a"} Dec 10 14:35:37 crc kubenswrapper[4718]: E1210 14:35:36.999727 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 10 14:35:37 crc kubenswrapper[4718]: E1210 14:35:37.000341 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5qwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6xgvp_openshift-marketplace(413a70a8-23c0-4e34-a2f7-ec5cb980bfb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:37 crc kubenswrapper[4718]: E1210 14:35:37.001572 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6xgvp" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" Dec 10 14:35:38 crc 
kubenswrapper[4718]: E1210 14:35:38.891056 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 10 14:35:38 crc kubenswrapper[4718]: E1210 14:35:38.891801 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98h8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-q2xbb_openshift-marketplace(046f580c-2cdc-471d-a16a-43913d0b6092): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:38 crc kubenswrapper[4718]: E1210 14:35:38.893026 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q2xbb" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.082997 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6xgvp" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.155765 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.156310 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fs954,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rwvfp_openshift-marketplace(bc324349-4229-4bd4-b5b1-35de752d9f85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.157739 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rwvfp" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" Dec 10 14:35:39 crc 
kubenswrapper[4718]: E1210 14:35:39.298180 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.298362 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fb6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-d8blb_openshift-marketplace(f73d3e76-59b1-446b-b599-18b0c115447f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.299609 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-d8blb" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.337069 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.337261 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4zhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b4bq2_openshift-marketplace(37550300-88f8-40dc-be98-1825518dc65c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.338462 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b4bq2" podUID="37550300-88f8-40dc-be98-1825518dc65c" Dec 10 14:35:39 crc 
kubenswrapper[4718]: E1210 14:35:39.396641 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.396829 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpzrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-6bztl_openshift-marketplace(d0a8400b-e706-4ec6-b7e8-e4c25bfe9440): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:35:39 crc kubenswrapper[4718]: E1210 14:35:39.398840 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6bztl" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" Dec 10 14:35:39 crc kubenswrapper[4718]: I1210 14:35:39.566516 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 10 14:35:39 crc kubenswrapper[4718]: I1210 14:35:39.837365 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 10 14:35:40 crc kubenswrapper[4718]: E1210 14:35:40.156777 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q2xbb" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" Dec 10 14:35:40 crc kubenswrapper[4718]: E1210 14:35:40.156864 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-d8blb" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" Dec 10 14:35:40 crc kubenswrapper[4718]: E1210 14:35:40.156956 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rwvfp" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" Dec 10 14:35:40 crc kubenswrapper[4718]: E1210 14:35:40.157200 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b4bq2" podUID="37550300-88f8-40dc-be98-1825518dc65c" Dec 10 14:35:40 crc kubenswrapper[4718]: I1210 14:35:40.374646 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"09fe8e9b-3f80-4979-8203-7aca9407605d","Type":"ContainerStarted","Data":"244f3eb28636604d0434301a99fbf9a282b6d8bfb6bffafe89d1636a90681496"} Dec 10 14:35:40 crc kubenswrapper[4718]: I1210 14:35:40.380506 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4","Type":"ContainerStarted","Data":"6cc48f940f88e1226cf8f3951d87587befb36a52213a1d726a062c34d4515139"} Dec 10 14:35:40 crc kubenswrapper[4718]: E1210 14:35:40.382126 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6bztl" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.387553 4718 generic.go:334] "Generic (PLEG): container finished" podID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerID="c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1" exitCode=0 Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.387652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-bb57t" event={"ID":"37fd67ff-cb68-413b-a919-929db5e9d0b1","Type":"ContainerDied","Data":"c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1"} Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.390882 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d" event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerStarted","Data":"abf9badbcad981ca03bdb14664eb045c6026d29a565827fce4add513b5fac188"} Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.393862 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"d7904af25779ae17073393bf7640a10c1799049d2c7e4bb956caaee74cd26ba8"} Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.395820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4","Type":"ContainerStarted","Data":"ebdbee023b9ae3b4cf1b21adf8939527969b9f9948f23cba67402a3346331cfa"} Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.397278 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"09fe8e9b-3f80-4979-8203-7aca9407605d","Type":"ContainerStarted","Data":"30813467ed4bf581f31f34768ceda4d82b2b1e2b67fdef5874fe464fa02523fe"} Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.398748 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-r8zbt" event={"ID":"1494ebfa-d66c-4200-a336-2cedebcd5889","Type":"ContainerStarted","Data":"b3a0bf46ead187830ebd7b2f67b7a9242e7a39cad3712d8a1769da1c3543d92a"} Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.462659 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=50.462617843 podStartE2EDuration="50.462617843s" podCreationTimestamp="2025-12-10 14:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:35:41.461172796 +0000 UTC m=+246.410396233" watchObservedRunningTime="2025-12-10 14:35:41.462617843 +0000 UTC m=+246.411841290" Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.464831 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=46.464815008 podStartE2EDuration="46.464815008s" podCreationTimestamp="2025-12-10 14:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:35:41.446099886 +0000 UTC m=+246.395323323" watchObservedRunningTime="2025-12-10 14:35:41.464815008 +0000 UTC m=+246.414038435" Dec 10 14:35:41 crc kubenswrapper[4718]: I1210 14:35:41.499236 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-r8zbt" podStartSLOduration=222.499206336 podStartE2EDuration="3m42.499206336s" podCreationTimestamp="2025-12-10 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:35:41.497285137 +0000 UTC m=+246.446508574" watchObservedRunningTime="2025-12-10 14:35:41.499206336 +0000 UTC m=+246.448429753" Dec 10 14:35:42 crc kubenswrapper[4718]: I1210 14:35:42.405846 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b4486bd-f5e8-41a1-8107-d1b1d57c87c4" containerID="ebdbee023b9ae3b4cf1b21adf8939527969b9f9948f23cba67402a3346331cfa" exitCode=0 Dec 10 14:35:42 crc kubenswrapper[4718]: I1210 14:35:42.406171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4","Type":"ContainerDied","Data":"ebdbee023b9ae3b4cf1b21adf8939527969b9f9948f23cba67402a3346331cfa"} Dec 10 14:35:42 crc kubenswrapper[4718]: I1210 14:35:42.408515 4718 generic.go:334] "Generic (PLEG): container finished" podID="a267a67e-effa-40d4-8923-7669798e594d" containerID="abf9badbcad981ca03bdb14664eb045c6026d29a565827fce4add513b5fac188" exitCode=0 Dec 10 14:35:42 crc kubenswrapper[4718]: I1210 14:35:42.409801 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d" event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerDied","Data":"abf9badbcad981ca03bdb14664eb045c6026d29a565827fce4add513b5fac188"} Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.459601 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb57t" event={"ID":"37fd67ff-cb68-413b-a919-929db5e9d0b1","Type":"ContainerStarted","Data":"3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd"} Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.482079 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bb57t" podStartSLOduration=3.594966632 podStartE2EDuration="1m29.48204896s" podCreationTimestamp="2025-12-10 14:34:14 +0000 UTC" firstStartedPulling="2025-12-10 14:34:16.634271289 +0000 UTC m=+161.583494706" lastFinishedPulling="2025-12-10 14:35:42.521353607 +0000 UTC m=+247.470577034" observedRunningTime="2025-12-10 14:35:43.477846764 +0000 UTC m=+248.427070181" watchObservedRunningTime="2025-12-10 14:35:43.48204896 +0000 UTC m=+248.431272377" Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.502298 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l2t5d" podStartSLOduration=2.897488279 podStartE2EDuration="1m29.502246459s" 
podCreationTimestamp="2025-12-10 14:34:14 +0000 UTC" firstStartedPulling="2025-12-10 14:34:16.671047688 +0000 UTC m=+161.620271105" lastFinishedPulling="2025-12-10 14:35:43.275805848 +0000 UTC m=+248.225029285" observedRunningTime="2025-12-10 14:35:43.495764136 +0000 UTC m=+248.444987573" watchObservedRunningTime="2025-12-10 14:35:43.502246459 +0000 UTC m=+248.451469876" Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.817974 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.965250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kubelet-dir\") pod \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.965430 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kube-api-access\") pod \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\" (UID: \"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4\") " Dec 10 14:35:43 crc kubenswrapper[4718]: I1210 14:35:43.966658 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2b4486bd-f5e8-41a1-8107-d1b1d57c87c4" (UID: "2b4486bd-f5e8-41a1-8107-d1b1d57c87c4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.067294 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.434165 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2b4486bd-f5e8-41a1-8107-d1b1d57c87c4" (UID: "2b4486bd-f5e8-41a1-8107-d1b1d57c87c4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.435288 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2b4486bd-f5e8-41a1-8107-d1b1d57c87c4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.481277 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d" event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerStarted","Data":"cd3457ae829587d2caa906dddebd7a06d9b8bde377bb5e017d6869df51191026"} Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.486102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2b4486bd-f5e8-41a1-8107-d1b1d57c87c4","Type":"ContainerDied","Data":"6cc48f940f88e1226cf8f3951d87587befb36a52213a1d726a062c34d4515139"} Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.486174 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc48f940f88e1226cf8f3951d87587befb36a52213a1d726a062c34d4515139" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.486734 4718 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.563563 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:35:44 crc kubenswrapper[4718]: I1210 14:35:44.563638 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:35:45 crc kubenswrapper[4718]: I1210 14:35:45.206003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:35:45 crc kubenswrapper[4718]: I1210 14:35:45.206119 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:35:46 crc kubenswrapper[4718]: I1210 14:35:46.381760 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l2t5d" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="registry-server" probeResult="failure" output=< Dec 10 14:35:46 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 14:35:46 crc kubenswrapper[4718]: > Dec 10 14:35:46 crc kubenswrapper[4718]: I1210 14:35:46.383945 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bb57t" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="registry-server" probeResult="failure" output=< Dec 10 14:35:46 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 14:35:46 crc kubenswrapper[4718]: > Dec 10 14:35:54 crc kubenswrapper[4718]: I1210 14:35:54.604236 4718 generic.go:334] "Generic (PLEG): container finished" podID="37550300-88f8-40dc-be98-1825518dc65c" containerID="e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f" exitCode=0 Dec 10 14:35:54 crc kubenswrapper[4718]: I1210 
14:35:54.604631 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4bq2" event={"ID":"37550300-88f8-40dc-be98-1825518dc65c","Type":"ContainerDied","Data":"e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f"} Dec 10 14:35:54 crc kubenswrapper[4718]: I1210 14:35:54.615142 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:35:54 crc kubenswrapper[4718]: I1210 14:35:54.677355 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.274254 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.318094 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.615499 4718 generic.go:334] "Generic (PLEG): container finished" podID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerID="ebaf46d9ca1756e9f79fd538ebdad606c5f4d045b7d180c0089efe608634748f" exitCode=0 Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.615574 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bztl" event={"ID":"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440","Type":"ContainerDied","Data":"ebaf46d9ca1756e9f79fd538ebdad606c5f4d045b7d180c0089efe608634748f"} Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.621617 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4bq2" event={"ID":"37550300-88f8-40dc-be98-1825518dc65c","Type":"ContainerStarted","Data":"d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c"} Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 
14:35:55.628082 4718 generic.go:334] "Generic (PLEG): container finished" podID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerID="52bd36c22d3cd72ebbc0f8798fec85da2a3b3436688f120e1d68324eacc73731" exitCode=0 Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.628151 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xgvp" event={"ID":"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5","Type":"ContainerDied","Data":"52bd36c22d3cd72ebbc0f8798fec85da2a3b3436688f120e1d68324eacc73731"} Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.639681 4718 generic.go:334] "Generic (PLEG): container finished" podID="046f580c-2cdc-471d-a16a-43913d0b6092" containerID="21a910cc0ff1837efaef01995602e229cc866c15638e7e2a20173cd17f1db1af" exitCode=0 Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.639751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2xbb" event={"ID":"046f580c-2cdc-471d-a16a-43913d0b6092","Type":"ContainerDied","Data":"21a910cc0ff1837efaef01995602e229cc866c15638e7e2a20173cd17f1db1af"} Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.646293 4718 generic.go:334] "Generic (PLEG): container finished" podID="f73d3e76-59b1-446b-b599-18b0c115447f" containerID="1e613538c65f1d16e639d7f0e6c979fad10c933cce720339f1a55db520f74c79" exitCode=0 Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.646369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8blb" event={"ID":"f73d3e76-59b1-446b-b599-18b0c115447f","Type":"ContainerDied","Data":"1e613538c65f1d16e639d7f0e6c979fad10c933cce720339f1a55db520f74c79"} Dec 10 14:35:55 crc kubenswrapper[4718]: I1210 14:35:55.686987 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b4bq2" podStartSLOduration=2.939078798 podStartE2EDuration="1m41.686948181s" podCreationTimestamp="2025-12-10 14:34:14 +0000 UTC" 
firstStartedPulling="2025-12-10 14:34:16.629581312 +0000 UTC m=+161.578804729" lastFinishedPulling="2025-12-10 14:35:55.377450695 +0000 UTC m=+260.326674112" observedRunningTime="2025-12-10 14:35:55.685254429 +0000 UTC m=+260.634477856" watchObservedRunningTime="2025-12-10 14:35:55.686948181 +0000 UTC m=+260.636171598" Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.382083 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bb57t"] Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.770316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xgvp" event={"ID":"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5","Type":"ContainerStarted","Data":"aa306118d66fce8d8475ce09b0bc834630b33d511d6497fac629d3ef6c6fa23f"} Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.774873 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2xbb" event={"ID":"046f580c-2cdc-471d-a16a-43913d0b6092","Type":"ContainerStarted","Data":"cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410"} Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.780827 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8blb" event={"ID":"f73d3e76-59b1-446b-b599-18b0c115447f","Type":"ContainerStarted","Data":"df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c"} Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.796954 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bztl" event={"ID":"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440","Type":"ContainerStarted","Data":"0458b8172a28e2047758a32f9869b39dea094544b91b0c93c34eab8e99b04cef"} Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.802420 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bb57t" 
podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="registry-server" containerID="cri-o://3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd" gracePeriod=2 Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.802507 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerStarted","Data":"91d16ef19dad6da53c2e02c62a58f445543841825b8ab78df5d5e14094fb4712"} Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.806021 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xgvp" podStartSLOduration=2.210969625 podStartE2EDuration="1m39.805993598s" podCreationTimestamp="2025-12-10 14:34:17 +0000 UTC" firstStartedPulling="2025-12-10 14:34:18.732079206 +0000 UTC m=+163.681302623" lastFinishedPulling="2025-12-10 14:35:56.327103179 +0000 UTC m=+261.276326596" observedRunningTime="2025-12-10 14:35:56.802910145 +0000 UTC m=+261.752133562" watchObservedRunningTime="2025-12-10 14:35:56.805993598 +0000 UTC m=+261.755217015" Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.829209 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q2xbb" podStartSLOduration=3.263453914 podStartE2EDuration="1m42.829178515s" podCreationTimestamp="2025-12-10 14:34:14 +0000 UTC" firstStartedPulling="2025-12-10 14:34:16.638085714 +0000 UTC m=+161.587309131" lastFinishedPulling="2025-12-10 14:35:56.203810315 +0000 UTC m=+261.153033732" observedRunningTime="2025-12-10 14:35:56.825713221 +0000 UTC m=+261.774936668" watchObservedRunningTime="2025-12-10 14:35:56.829178515 +0000 UTC m=+261.778401922" Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.890481 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6bztl" podStartSLOduration=3.488444371 
podStartE2EDuration="1m40.890450762s" podCreationTimestamp="2025-12-10 14:34:16 +0000 UTC" firstStartedPulling="2025-12-10 14:34:18.711771479 +0000 UTC m=+163.660994896" lastFinishedPulling="2025-12-10 14:35:56.11377787 +0000 UTC m=+261.063001287" observedRunningTime="2025-12-10 14:35:56.888133989 +0000 UTC m=+261.837357406" watchObservedRunningTime="2025-12-10 14:35:56.890450762 +0000 UTC m=+261.839674179" Dec 10 14:35:56 crc kubenswrapper[4718]: I1210 14:35:56.913621 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d8blb" podStartSLOduration=3.53290491 podStartE2EDuration="1m40.913592518s" podCreationTimestamp="2025-12-10 14:34:16 +0000 UTC" firstStartedPulling="2025-12-10 14:34:18.708382364 +0000 UTC m=+163.657605781" lastFinishedPulling="2025-12-10 14:35:56.089069972 +0000 UTC m=+261.038293389" observedRunningTime="2025-12-10 14:35:56.911404819 +0000 UTC m=+261.860628246" watchObservedRunningTime="2025-12-10 14:35:56.913592518 +0000 UTC m=+261.862815935" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.001347 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.035760 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.676291 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.714864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-catalog-content\") pod \"37fd67ff-cb68-413b-a919-929db5e9d0b1\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.715153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wfw5\" (UniqueName: \"kubernetes.io/projected/37fd67ff-cb68-413b-a919-929db5e9d0b1-kube-api-access-9wfw5\") pod \"37fd67ff-cb68-413b-a919-929db5e9d0b1\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.715203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-utilities\") pod \"37fd67ff-cb68-413b-a919-929db5e9d0b1\" (UID: \"37fd67ff-cb68-413b-a919-929db5e9d0b1\") " Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.716271 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-utilities" (OuterVolumeSpecName: "utilities") pod "37fd67ff-cb68-413b-a919-929db5e9d0b1" (UID: "37fd67ff-cb68-413b-a919-929db5e9d0b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.724121 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fd67ff-cb68-413b-a919-929db5e9d0b1-kube-api-access-9wfw5" (OuterVolumeSpecName: "kube-api-access-9wfw5") pod "37fd67ff-cb68-413b-a919-929db5e9d0b1" (UID: "37fd67ff-cb68-413b-a919-929db5e9d0b1"). InnerVolumeSpecName "kube-api-access-9wfw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.751592 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.751698 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.778193 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37fd67ff-cb68-413b-a919-929db5e9d0b1" (UID: "37fd67ff-cb68-413b-a919-929db5e9d0b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.816508 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wfw5\" (UniqueName: \"kubernetes.io/projected/37fd67ff-cb68-413b-a919-929db5e9d0b1-kube-api-access-9wfw5\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.816556 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.816574 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd67ff-cb68-413b-a919-929db5e9d0b1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.820534 4718 generic.go:334] "Generic (PLEG): container finished" podID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerID="3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd" exitCode=0 Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 
14:35:57.820636 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bb57t" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.820695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb57t" event={"ID":"37fd67ff-cb68-413b-a919-929db5e9d0b1","Type":"ContainerDied","Data":"3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd"} Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.820825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb57t" event={"ID":"37fd67ff-cb68-413b-a919-929db5e9d0b1","Type":"ContainerDied","Data":"305e295875bdd95d1f487ff81e351ce09bb6327ea7a9823c8e45c607a3cc9cf0"} Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.820863 4718 scope.go:117] "RemoveContainer" containerID="3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.836080 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerID="91d16ef19dad6da53c2e02c62a58f445543841825b8ab78df5d5e14094fb4712" exitCode=0 Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.836209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerDied","Data":"91d16ef19dad6da53c2e02c62a58f445543841825b8ab78df5d5e14094fb4712"} Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.866022 4718 scope.go:117] "RemoveContainer" containerID="c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.884726 4718 scope.go:117] "RemoveContainer" containerID="a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.894684 4718 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-bb57t"] Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.900697 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bb57t"] Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.913079 4718 scope.go:117] "RemoveContainer" containerID="3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd" Dec 10 14:35:57 crc kubenswrapper[4718]: E1210 14:35:57.914896 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd\": container with ID starting with 3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd not found: ID does not exist" containerID="3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.914946 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd"} err="failed to get container status \"3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd\": rpc error: code = NotFound desc = could not find container \"3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd\": container with ID starting with 3cb0d77995108a7b2618f6bec0008b40586274bc1c1062430dd638a4125285bd not found: ID does not exist" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.915017 4718 scope.go:117] "RemoveContainer" containerID="c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1" Dec 10 14:35:57 crc kubenswrapper[4718]: E1210 14:35:57.916274 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1\": container with ID starting with 
c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1 not found: ID does not exist" containerID="c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.916349 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1"} err="failed to get container status \"c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1\": rpc error: code = NotFound desc = could not find container \"c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1\": container with ID starting with c8a941842a660497a056927e6cb412d54a17b3e5895c0c61fd2d10e36895aed1 not found: ID does not exist" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.916423 4718 scope.go:117] "RemoveContainer" containerID="a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d" Dec 10 14:35:57 crc kubenswrapper[4718]: E1210 14:35:57.916835 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d\": container with ID starting with a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d not found: ID does not exist" containerID="a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d" Dec 10 14:35:57 crc kubenswrapper[4718]: I1210 14:35:57.916864 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d"} err="failed to get container status \"a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d\": rpc error: code = NotFound desc = could not find container \"a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d\": container with ID starting with a9db68656afc76b5099b1b6ac72870f9dc5087974e6735a69a05c8060dc3b96d not found: ID does not 
exist" Dec 10 14:35:58 crc kubenswrapper[4718]: I1210 14:35:58.050263 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" path="/var/lib/kubelet/pods/37fd67ff-cb68-413b-a919-929db5e9d0b1/volumes" Dec 10 14:35:58 crc kubenswrapper[4718]: I1210 14:35:58.059090 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tk99n"] Dec 10 14:35:58 crc kubenswrapper[4718]: I1210 14:35:58.107817 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-d8blb" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" probeResult="failure" output=< Dec 10 14:35:58 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 14:35:58 crc kubenswrapper[4718]: > Dec 10 14:35:58 crc kubenswrapper[4718]: I1210 14:35:58.815552 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xgvp" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="registry-server" probeResult="failure" output=< Dec 10 14:35:58 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 14:35:58 crc kubenswrapper[4718]: > Dec 10 14:35:59 crc kubenswrapper[4718]: I1210 14:35:59.851974 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerStarted","Data":"8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe"} Dec 10 14:35:59 crc kubenswrapper[4718]: I1210 14:35:59.875491 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rwvfp" podStartSLOduration=3.407452667 podStartE2EDuration="1m42.875466853s" podCreationTimestamp="2025-12-10 14:34:17 +0000 UTC" firstStartedPulling="2025-12-10 14:34:19.744204071 +0000 UTC m=+164.693427488" 
lastFinishedPulling="2025-12-10 14:35:59.212218257 +0000 UTC m=+264.161441674" observedRunningTime="2025-12-10 14:35:59.871135746 +0000 UTC m=+264.820359183" watchObservedRunningTime="2025-12-10 14:35:59.875466853 +0000 UTC m=+264.824690270" Dec 10 14:36:04 crc kubenswrapper[4718]: I1210 14:36:04.788405 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:36:04 crc kubenswrapper[4718]: I1210 14:36:04.789012 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:36:04 crc kubenswrapper[4718]: I1210 14:36:04.834590 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:36:04 crc kubenswrapper[4718]: I1210 14:36:04.928500 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b4bq2" Dec 10 14:36:05 crc kubenswrapper[4718]: I1210 14:36:05.047313 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:36:05 crc kubenswrapper[4718]: I1210 14:36:05.047418 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:36:05 crc kubenswrapper[4718]: I1210 14:36:05.100729 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:36:05 crc kubenswrapper[4718]: I1210 14:36:05.937139 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:36:06 crc kubenswrapper[4718]: I1210 14:36:06.598545 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:36:06 crc kubenswrapper[4718]: I1210 
14:36:06.599367 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:36:06 crc kubenswrapper[4718]: I1210 14:36:06.765917 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:36:06 crc kubenswrapper[4718]: I1210 14:36:06.937020 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:36:07 crc kubenswrapper[4718]: I1210 14:36:07.043892 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:36:07 crc kubenswrapper[4718]: I1210 14:36:07.096318 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:36:07 crc kubenswrapper[4718]: I1210 14:36:07.276140 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2xbb"] Dec 10 14:36:07 crc kubenswrapper[4718]: I1210 14:36:07.792212 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:36:07 crc kubenswrapper[4718]: I1210 14:36:07.837433 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:36:07 crc kubenswrapper[4718]: I1210 14:36:07.900600 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q2xbb" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="registry-server" containerID="cri-o://cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410" gracePeriod=2 Dec 10 14:36:08 crc kubenswrapper[4718]: I1210 14:36:08.279565 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:36:08 crc kubenswrapper[4718]: I1210 14:36:08.279639 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:36:08 crc kubenswrapper[4718]: I1210 14:36:08.339979 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:36:08 crc kubenswrapper[4718]: I1210 14:36:08.944849 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:36:09 crc kubenswrapper[4718]: I1210 14:36:09.075122 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d8blb"] Dec 10 14:36:09 crc kubenswrapper[4718]: I1210 14:36:09.075669 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d8blb" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" containerID="cri-o://df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" gracePeriod=2 Dec 10 14:36:11 crc kubenswrapper[4718]: I1210 14:36:11.477574 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwvfp"] Dec 10 14:36:11 crc kubenswrapper[4718]: I1210 14:36:11.478873 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rwvfp" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="registry-server" containerID="cri-o://8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" gracePeriod=2 Dec 10 14:36:15 crc kubenswrapper[4718]: E1210 14:36:15.048059 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410 is running failed: 
container process not found" containerID="cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:15 crc kubenswrapper[4718]: E1210 14:36:15.050478 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410 is running failed: container process not found" containerID="cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:15 crc kubenswrapper[4718]: E1210 14:36:15.051289 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410 is running failed: container process not found" containerID="cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:15 crc kubenswrapper[4718]: E1210 14:36:15.051435 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-q2xbb" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="registry-server" Dec 10 14:36:15 crc kubenswrapper[4718]: I1210 14:36:15.780510 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2xbb_046f580c-2cdc-471d-a16a-43913d0b6092/registry-server/0.log" Dec 10 14:36:15 crc kubenswrapper[4718]: I1210 14:36:15.782096 4718 generic.go:334] "Generic (PLEG): container finished" podID="046f580c-2cdc-471d-a16a-43913d0b6092" containerID="cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410" 
exitCode=137 Dec 10 14:36:15 crc kubenswrapper[4718]: I1210 14:36:15.782143 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2xbb" event={"ID":"046f580c-2cdc-471d-a16a-43913d0b6092","Type":"ContainerDied","Data":"cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410"} Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.307481 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2xbb_046f580c-2cdc-471d-a16a-43913d0b6092/registry-server/0.log" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.315079 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.471580 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-catalog-content\") pod \"046f580c-2cdc-471d-a16a-43913d0b6092\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.471659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98h8v\" (UniqueName: \"kubernetes.io/projected/046f580c-2cdc-471d-a16a-43913d0b6092-kube-api-access-98h8v\") pod \"046f580c-2cdc-471d-a16a-43913d0b6092\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.471757 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-utilities\") pod \"046f580c-2cdc-471d-a16a-43913d0b6092\" (UID: \"046f580c-2cdc-471d-a16a-43913d0b6092\") " Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.472631 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-utilities" (OuterVolumeSpecName: "utilities") pod "046f580c-2cdc-471d-a16a-43913d0b6092" (UID: "046f580c-2cdc-471d-a16a-43913d0b6092"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.484725 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/046f580c-2cdc-471d-a16a-43913d0b6092-kube-api-access-98h8v" (OuterVolumeSpecName: "kube-api-access-98h8v") pod "046f580c-2cdc-471d-a16a-43913d0b6092" (UID: "046f580c-2cdc-471d-a16a-43913d0b6092"). InnerVolumeSpecName "kube-api-access-98h8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.531762 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "046f580c-2cdc-471d-a16a-43913d0b6092" (UID: "046f580c-2cdc-471d-a16a-43913d0b6092"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.573304 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.573354 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98h8v\" (UniqueName: \"kubernetes.io/projected/046f580c-2cdc-471d-a16a-43913d0b6092-kube-api-access-98h8v\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.573370 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/046f580c-2cdc-471d-a16a-43913d0b6092-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.791300 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q2xbb_046f580c-2cdc-471d-a16a-43913d0b6092/registry-server/0.log" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.792226 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q2xbb" event={"ID":"046f580c-2cdc-471d-a16a-43913d0b6092","Type":"ContainerDied","Data":"c941e6a7f2e0698089d953a22c3789ea24d5e3f6bde7a3d97cf8ec1579026bbe"} Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.792285 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q2xbb" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.792291 4718 scope.go:117] "RemoveContainer" containerID="cb9d646b1db94968420716075915786980184e186bf3b22c372426164a711410" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.794281 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8blb_f73d3e76-59b1-446b-b599-18b0c115447f/registry-server/0.log" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.795759 4718 generic.go:334] "Generic (PLEG): container finished" podID="f73d3e76-59b1-446b-b599-18b0c115447f" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" exitCode=137 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.795831 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8blb" event={"ID":"f73d3e76-59b1-446b-b599-18b0c115447f","Type":"ContainerDied","Data":"df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c"} Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.797622 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rwvfp_bc324349-4229-4bd4-b5b1-35de752d9f85/registry-server/0.log" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.798408 4718 generic.go:334] "Generic (PLEG): container finished" podID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" exitCode=137 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.798463 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerDied","Data":"8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe"} Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.814895 4718 scope.go:117] "RemoveContainer" 
containerID="21a910cc0ff1837efaef01995602e229cc866c15638e7e2a20173cd17f1db1af" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.827162 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q2xbb"] Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.831978 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q2xbb"] Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:16.859804 4718 scope.go:117] "RemoveContainer" containerID="ee70afc5d3452a59e2923013dd9b6d62adf8ab282eb0db08e332b37e6299e027" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:17.003558 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:17.004274 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:17.004659 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: 
E1210 14:36:17.004781 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-d8blb" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.029322 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" path="/var/lib/kubelet/pods/046f580c-2cdc-471d-a16a-43913d0b6092/volumes" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.280312 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.280959 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.281545 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" 
cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.281646 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-rwvfp" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.567951 4718 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.568457 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee" gracePeriod=15 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.568575 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" gracePeriod=15 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.568646 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" gracePeriod=15 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.568575 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" gracePeriod=15 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.569709 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" gracePeriod=15 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.579303 4718 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580052 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580070 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580094 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580101 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580112 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580121 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 14:36:28 crc 
kubenswrapper[4718]: E1210 14:36:18.580129 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="extract-content" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580136 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="extract-content" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580151 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580158 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580172 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="extract-content" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580178 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="extract-content" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580196 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580203 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580211 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580218 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 14:36:28 crc kubenswrapper[4718]: 
E1210 14:36:18.580230 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580238 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580248 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4486bd-f5e8-41a1-8107-d1b1d57c87c4" containerName="pruner" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580254 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4486bd-f5e8-41a1-8107-d1b1d57c87c4" containerName="pruner" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580264 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="extract-utilities" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580270 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="extract-utilities" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580282 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="extract-utilities" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580288 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="extract-utilities" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.580301 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580308 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: 
E1210 14:36:18.580322 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.580328 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581422 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581453 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581472 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581484 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581496 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581523 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4486bd-f5e8-41a1-8107-d1b1d57c87c4" containerName="pruner" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581535 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581553 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581561 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="046f580c-2cdc-471d-a16a-43913d0b6092" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581572 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fd67ff-cb68-413b-a919-929db5e9d0b1" containerName="registry-server" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.581879 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.581889 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.590527 4718 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.592085 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.605320 4718 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.605794 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.605895 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.605926 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.605957 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 
10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.606025 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.606058 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.606119 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.606227 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.635121 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708082 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708166 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708203 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708236 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708284 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708430 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708365 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708443 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708490 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708403 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708538 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708546 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.708575 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:18.931772 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:36:28 crc kubenswrapper[4718]: W1210 14:36:18.955743 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-0b1b003324521de0085f1b07923b3a3d60f637a27cb3e862c62a82ed7b29463c WatchSource:0}: Error finding container 0b1b003324521de0085f1b07923b3a3d60f637a27cb3e862c62a82ed7b29463c: Status 404 returned error can't find the container with id 0b1b003324521de0085f1b07923b3a3d60f637a27cb3e862c62a82ed7b29463c Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:18.958961 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe15f6f4e372a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:36:18.958063402 +0000 UTC m=+283.907286819,LastTimestamp:2025-12-10 14:36:18.958063402 +0000 UTC m=+283.907286819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:19.824431 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0b1b003324521de0085f1b07923b3a3d60f637a27cb3e862c62a82ed7b29463c"} Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:22.800636 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe15f6f4e372a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:36:18.958063402 +0000 UTC m=+283.907286819,LastTimestamp:2025-12-10 14:36:18.958063402 +0000 UTC m=+283.907286819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:23.114592 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerName="oauth-openshift" containerID="cri-o://528ce00589e35ed31d99b6f71e5cd305814a543f03811cf8b692571b5421a8c7" gracePeriod=15 Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:24.541502 4718 
patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tk99n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" start-of-body= Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:24.541886 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:24.825034 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:24.825456 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:24.825929 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:24.826216 4718 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:24.826665 4718 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:24.826752 4718 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:24.827529 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="200ms" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:25.028987 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="400ms" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:25.430707 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="800ms" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:26.023162 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:26.231692 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="1.6s" Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:27.001663 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:27.002546 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:27.003012 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:27.003110 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-d8blb" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" 
Dec 10 14:36:28 crc kubenswrapper[4718]: E1210 14:36:27.833505 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="3.2s" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:28.106010 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Dec 10 14:36:28 crc kubenswrapper[4718]: I1210 14:36:28.107523 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:28.280290 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:28.281360 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:28.282094 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is 
running failed: container process not found" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:28.282123 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-rwvfp" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="registry-server" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.388367 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.390837 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.392757 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.393255 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.393661 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.549594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.549686 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.549725 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.549775 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.549804 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.549897 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.550351 4718 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.550368 4718 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.550378 4718 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.620609 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.621375 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" exitCode=-1 Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.621423 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" exitCode=0 Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.621435 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" exitCode=0 Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.621448 4718 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" exitCode=2 Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.621496 4718 scope.go:117] "RemoveContainer" containerID="d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:28.641777 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.208185 4718 scope.go:117] "RemoveContainer" containerID="6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.226214 4718 scope.go:117] "RemoveContainer" containerID="cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.247025 4718 scope.go:117] "RemoveContainer" containerID="32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.267291 4718 scope.go:117] "RemoveContainer" containerID="65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.284952 4718 scope.go:117] "RemoveContainer" containerID="a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.306264 4718 scope.go:117] "RemoveContainer" containerID="d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.307137 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": container with ID starting with d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80 not found: ID does not exist" containerID="d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" Dec 10 
14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.307171 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80"} err="failed to get container status \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": rpc error: code = NotFound desc = could not find container \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": container with ID starting with d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.307210 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.307704 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": container with ID starting with 9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8 not found: ID does not exist" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.307729 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8"} err="failed to get container status \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": rpc error: code = NotFound desc = could not find container \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": container with ID starting with 9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.307947 4718 scope.go:117] "RemoveContainer" 
containerID="6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.308541 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": container with ID starting with 6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a not found: ID does not exist" containerID="6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.308604 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a"} err="failed to get container status \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": rpc error: code = NotFound desc = could not find container \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": container with ID starting with 6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.308655 4718 scope.go:117] "RemoveContainer" containerID="cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.309073 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": container with ID starting with cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee not found: ID does not exist" containerID="cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.309102 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee"} err="failed to get container status \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": rpc error: code = NotFound desc = could not find container \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": container with ID starting with cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.309120 4718 scope.go:117] "RemoveContainer" containerID="32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.309417 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": container with ID starting with 32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a not found: ID does not exist" containerID="32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.309455 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a"} err="failed to get container status \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": rpc error: code = NotFound desc = could not find container \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": container with ID starting with 32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.309482 4718 scope.go:117] "RemoveContainer" containerID="65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.309883 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": container with ID starting with 65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee not found: ID does not exist" containerID="65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.309921 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee"} err="failed to get container status \"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": rpc error: code = NotFound desc = could not find container \"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": container with ID starting with 65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.309951 4718 scope.go:117] "RemoveContainer" containerID="a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7" Dec 10 14:36:29 crc kubenswrapper[4718]: E1210 14:36:29.314987 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": container with ID starting with a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7 not found: ID does not exist" containerID="a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.315078 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7"} err="failed to get container status \"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": rpc error: code = NotFound desc = could not find container 
\"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": container with ID starting with a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.315138 4718 scope.go:117] "RemoveContainer" containerID="d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.315579 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80"} err="failed to get container status \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": rpc error: code = NotFound desc = could not find container \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": container with ID starting with d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.315613 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.317400 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8"} err="failed to get container status \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": rpc error: code = NotFound desc = could not find container \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": container with ID starting with 9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.317438 4718 scope.go:117] "RemoveContainer" containerID="6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.317775 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a"} err="failed to get container status \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": rpc error: code = NotFound desc = could not find container \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": container with ID starting with 6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.317806 4718 scope.go:117] "RemoveContainer" containerID="cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318019 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee"} err="failed to get container status \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": rpc error: code = NotFound desc = could not find container \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": container with ID starting with cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318040 4718 scope.go:117] "RemoveContainer" containerID="32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318312 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a"} err="failed to get container status \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": rpc error: code = NotFound desc = could not find container \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": container with ID starting with 
32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318330 4718 scope.go:117] "RemoveContainer" containerID="65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318549 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee"} err="failed to get container status \"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": rpc error: code = NotFound desc = could not find container \"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": container with ID starting with 65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318590 4718 scope.go:117] "RemoveContainer" containerID="a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318868 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7"} err="failed to get container status \"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": rpc error: code = NotFound desc = could not find container \"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": container with ID starting with a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.318883 4718 scope.go:117] "RemoveContainer" containerID="d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.319116 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80"} err="failed to get container status \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": rpc error: code = NotFound desc = could not find container \"d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80\": container with ID starting with d3d4b682855c5510a5957e308008602e4f749bd44ca52f974cb4be27d06a7c80 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.319131 4718 scope.go:117] "RemoveContainer" containerID="9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.319426 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8"} err="failed to get container status \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": rpc error: code = NotFound desc = could not find container \"9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8\": container with ID starting with 9d6e990a4a3f575ffc8a6399b8dd1fd59a84db71b34885946046fc7c5bd0a8e8 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.319443 4718 scope.go:117] "RemoveContainer" containerID="6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320063 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a"} err="failed to get container status \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": rpc error: code = NotFound desc = could not find container \"6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a\": container with ID starting with 6f363cabfc65ba2cb737968ead6f21568f30416a1f385f511f2d1a45da1e831a not found: ID does not 
exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320086 4718 scope.go:117] "RemoveContainer" containerID="cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320298 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee"} err="failed to get container status \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": rpc error: code = NotFound desc = could not find container \"cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee\": container with ID starting with cfa44e8f79d7525847ab10384aaf41105a8e6769b384679417cd0a7379748dee not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320320 4718 scope.go:117] "RemoveContainer" containerID="32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320634 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a"} err="failed to get container status \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": rpc error: code = NotFound desc = could not find container \"32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a\": container with ID starting with 32fcede9b35f9a45177a341e272e7c2cbe2e5c6bf699e877664a603a5f96124a not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320652 4718 scope.go:117] "RemoveContainer" containerID="65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320904 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee"} err="failed to get container status 
\"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": rpc error: code = NotFound desc = could not find container \"65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee\": container with ID starting with 65966d33004e0a6740f0b450f404729baec86fc47c84a58ee020a78b7378bfee not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.320923 4718 scope.go:117] "RemoveContainer" containerID="a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.321135 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7"} err="failed to get container status \"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": rpc error: code = NotFound desc = could not find container \"a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7\": container with ID starting with a3cbf8dfd3e1278a733c6ca7e888788d0bec845ff72dfd618e2070262d05b6a7 not found: ID does not exist" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.630075 4718 generic.go:334] "Generic (PLEG): container finished" podID="09fe8e9b-3f80-4979-8203-7aca9407605d" containerID="30813467ed4bf581f31f34768ceda4d82b2b1e2b67fdef5874fe464fa02523fe" exitCode=0 Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.630145 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"09fe8e9b-3f80-4979-8203-7aca9407605d","Type":"ContainerDied","Data":"30813467ed4bf581f31f34768ceda4d82b2b1e2b67fdef5874fe464fa02523fe"} Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.630836 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": 
dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.631100 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.631472 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.632069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"654cde95780bd524bd014d2d2c06218535751be6bc802e41fdb9882f77a20f4d"} Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.632837 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.632999 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 
14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.633148 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.633966 4718 generic.go:334] "Generic (PLEG): container finished" podID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerID="528ce00589e35ed31d99b6f71e5cd305814a543f03811cf8b692571b5421a8c7" exitCode=0 Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.634013 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" event={"ID":"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33","Type":"ContainerDied","Data":"528ce00589e35ed31d99b6f71e5cd305814a543f03811cf8b692571b5421a8c7"} Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.635076 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.650574 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.650811 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.651005 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.833032 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rwvfp_bc324349-4229-4bd4-b5b1-35de752d9f85/registry-server/0.log" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.834057 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.835173 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.835713 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.836120 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.836666 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.837536 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.838230 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.838610 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.838885 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.839186 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.839453 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.841631 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8blb_f73d3e76-59b1-446b-b599-18b0c115447f/registry-server/0.log" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.842360 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.842960 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.843401 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.843644 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.843954 4718 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.844333 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.844639 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-policies\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970429 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kclr9\" (UniqueName: \"kubernetes.io/projected/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-kube-api-access-kclr9\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970469 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fb6p\" (UniqueName: 
\"kubernetes.io/projected/f73d3e76-59b1-446b-b599-18b0c115447f-kube-api-access-4fb6p\") pod \"f73d3e76-59b1-446b-b599-18b0c115447f\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-utilities\") pod \"bc324349-4229-4bd4-b5b1-35de752d9f85\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-dir\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970557 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-error\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-login\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970618 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-session\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: 
\"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970665 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-ocp-branding-template\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970696 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-provider-selection\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970819 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-utilities\") pod \"f73d3e76-59b1-446b-b599-18b0c115447f\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-serving-cert\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970905 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-catalog-content\") pod \"bc324349-4229-4bd4-b5b1-35de752d9f85\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 
14:36:29.970936 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-cliconfig\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970959 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-service-ca\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.970984 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs954\" (UniqueName: \"kubernetes.io/projected/bc324349-4229-4bd4-b5b1-35de752d9f85-kube-api-access-fs954\") pod \"bc324349-4229-4bd4-b5b1-35de752d9f85\" (UID: \"bc324349-4229-4bd4-b5b1-35de752d9f85\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.971025 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-trusted-ca-bundle\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.971058 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-router-certs\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.971881 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.971900 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-idp-0-file-data\") pod \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\" (UID: \"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.972106 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-catalog-content\") pod \"f73d3e76-59b1-446b-b599-18b0c115447f\" (UID: \"f73d3e76-59b1-446b-b599-18b0c115447f\") " Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.972251 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.972449 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.973063 4718 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.973103 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.973120 4718 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.973229 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-utilities" (OuterVolumeSpecName: "utilities") pod "bc324349-4229-4bd4-b5b1-35de752d9f85" (UID: "bc324349-4229-4bd4-b5b1-35de752d9f85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.977228 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.977338 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-kube-api-access-kclr9" (OuterVolumeSpecName: "kube-api-access-kclr9") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "kube-api-access-kclr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.977803 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.977844 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.978146 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.978221 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-utilities" (OuterVolumeSpecName: "utilities") pod "f73d3e76-59b1-446b-b599-18b0c115447f" (UID: "f73d3e76-59b1-446b-b599-18b0c115447f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.978337 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.978647 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73d3e76-59b1-446b-b599-18b0c115447f-kube-api-access-4fb6p" (OuterVolumeSpecName: "kube-api-access-4fb6p") pod "f73d3e76-59b1-446b-b599-18b0c115447f" (UID: "f73d3e76-59b1-446b-b599-18b0c115447f"). InnerVolumeSpecName "kube-api-access-4fb6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.979075 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.979444 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc324349-4229-4bd4-b5b1-35de752d9f85-kube-api-access-fs954" (OuterVolumeSpecName: "kube-api-access-fs954") pod "bc324349-4229-4bd4-b5b1-35de752d9f85" (UID: "bc324349-4229-4bd4-b5b1-35de752d9f85"). InnerVolumeSpecName "kube-api-access-fs954". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.979747 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.980095 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.980337 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:29 crc kubenswrapper[4718]: I1210 14:36:29.983540 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" (UID: "90744db6-7e3a-4f7f-a0ae-4f4dfed7df33"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.001902 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f73d3e76-59b1-446b-b599-18b0c115447f" (UID: "f73d3e76-59b1-446b-b599-18b0c115447f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.029885 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075326 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kclr9\" (UniqueName: \"kubernetes.io/projected/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-kube-api-access-kclr9\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075424 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fb6p\" (UniqueName: \"kubernetes.io/projected/f73d3e76-59b1-446b-b599-18b0c115447f-kube-api-access-4fb6p\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075435 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075449 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075463 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075473 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075485 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075496 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075506 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075516 4718 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075526 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075537 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs954\" (UniqueName: \"kubernetes.io/projected/bc324349-4229-4bd4-b5b1-35de752d9f85-kube-api-access-fs954\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075547 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075557 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075567 4718 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.075576 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f73d3e76-59b1-446b-b599-18b0c115447f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: 
I1210 14:36:30.097298 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc324349-4229-4bd4-b5b1-35de752d9f85" (UID: "bc324349-4229-4bd4-b5b1-35de752d9f85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.177161 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc324349-4229-4bd4-b5b1-35de752d9f85-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.644206 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.644245 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" event={"ID":"90744db6-7e3a-4f7f-a0ae-4f4dfed7df33","Type":"ContainerDied","Data":"1f06d59f36d52e5f658bf3dca6193202b48ddbf4d6f959bb6067ec2aa5367e47"} Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.644333 4718 scope.go:117] "RemoveContainer" containerID="528ce00589e35ed31d99b6f71e5cd305814a543f03811cf8b692571b5421a8c7" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.645723 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.646429 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" 
pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.646880 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.647283 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.648616 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.649258 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d8blb_f73d3e76-59b1-446b-b599-18b0c115447f/registry-server/0.log" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.650916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d8blb" event={"ID":"f73d3e76-59b1-446b-b599-18b0c115447f","Type":"ContainerDied","Data":"50331affcb8059e7dfd2ad6f1199619153a8b67c9dabae953adc200b91b09f54"} Dec 10 14:36:30 crc 
kubenswrapper[4718]: I1210 14:36:30.651038 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d8blb" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.651965 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.652317 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.652703 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.653003 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.653240 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" 
pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.653349 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-rwvfp_bc324349-4229-4bd4-b5b1-35de752d9f85/registry-server/0.log" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.654416 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwvfp" event={"ID":"bc324349-4229-4bd4-b5b1-35de752d9f85","Type":"ContainerDied","Data":"69bbfded079dece5ede0ba96ccdbfd2db7f4e9067394aa6654a485e064bc0bef"} Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.654750 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwvfp" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.655412 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.655885 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.656091 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.656259 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.656435 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.656646 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.656807 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.656953 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" 
pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.657103 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.657248 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.678516 4718 scope.go:117] "RemoveContainer" containerID="df40adfd077501ccc7e53deb046ef0ce9625dee1121be6149b6a565b9bad290c" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.687834 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.688635 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: 
connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.689518 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.690260 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.690689 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.702084 4718 scope.go:117] "RemoveContainer" containerID="1e613538c65f1d16e639d7f0e6c979fad10c933cce720339f1a55db520f74c79" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.724818 4718 scope.go:117] "RemoveContainer" containerID="350ea21326b7dc3e696f2746bfa06f4be08012e6c6fb25f111eed083de7035b2" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.747073 4718 scope.go:117] "RemoveContainer" containerID="8fec9bf53304bf4615509afbf99c6388c5503aa74686b65b39182a40bb5e0abe" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.768194 4718 scope.go:117] "RemoveContainer" containerID="91d16ef19dad6da53c2e02c62a58f445543841825b8ab78df5d5e14094fb4712" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 
14:36:30.792891 4718 scope.go:117] "RemoveContainer" containerID="558531f00547567ce6eaf7db773fdc97bf415af2184066524a36edac6bcbdfec" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.846608 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.847494 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.847869 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.848212 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.848575 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.848883 
4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.992279 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-var-lock\") pod \"09fe8e9b-3f80-4979-8203-7aca9407605d\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.992434 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09fe8e9b-3f80-4979-8203-7aca9407605d-kube-api-access\") pod \"09fe8e9b-3f80-4979-8203-7aca9407605d\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.992454 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-kubelet-dir\") pod \"09fe8e9b-3f80-4979-8203-7aca9407605d\" (UID: \"09fe8e9b-3f80-4979-8203-7aca9407605d\") " Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.992457 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-var-lock" (OuterVolumeSpecName: "var-lock") pod "09fe8e9b-3f80-4979-8203-7aca9407605d" (UID: "09fe8e9b-3f80-4979-8203-7aca9407605d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.992668 4718 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.992706 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "09fe8e9b-3f80-4979-8203-7aca9407605d" (UID: "09fe8e9b-3f80-4979-8203-7aca9407605d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:36:30 crc kubenswrapper[4718]: I1210 14:36:30.999769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09fe8e9b-3f80-4979-8203-7aca9407605d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "09fe8e9b-3f80-4979-8203-7aca9407605d" (UID: "09fe8e9b-3f80-4979-8203-7aca9407605d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:36:31 crc kubenswrapper[4718]: E1210 14:36:31.034976 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="6.4s" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.094967 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09fe8e9b-3f80-4979-8203-7aca9407605d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.095637 4718 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09fe8e9b-3f80-4979-8203-7aca9407605d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.667656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"09fe8e9b-3f80-4979-8203-7aca9407605d","Type":"ContainerDied","Data":"244f3eb28636604d0434301a99fbf9a282b6d8bfb6bffafe89d1636a90681496"} Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.667749 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="244f3eb28636604d0434301a99fbf9a282b6d8bfb6bffafe89d1636a90681496" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.667708 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.700745 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.701449 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.702011 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.702290 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:31 crc kubenswrapper[4718]: I1210 14:36:31.702800 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.682136 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.682220 4718 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42" exitCode=1 Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.682281 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42"} Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.683010 4718 scope.go:117] "RemoveContainer" containerID="2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.684187 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.684526 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc 
kubenswrapper[4718]: I1210 14:36:32.684751 4718 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.685046 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.685448 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4718]: I1210 14:36:32.685910 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:32 crc kubenswrapper[4718]: E1210 14:36:32.802520 4718 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187fe15f6f4e372a openshift-kube-apiserver 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-10 14:36:18.958063402 +0000 UTC m=+283.907286819,LastTimestamp:2025-12-10 14:36:18.958063402 +0000 UTC m=+283.907286819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.691282 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.691579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bb3a10ee2dad76f3bd16e33ec02b87f1a442bb78877d6a619138b1a4bf9fcd4e"} Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.692647 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.693125 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.693449 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.693841 4718 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.694131 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:33 crc kubenswrapper[4718]: I1210 14:36:33.694404 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:35 crc kubenswrapper[4718]: E1210 14:36:35.852763 4718 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-10T14:36:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:35 crc kubenswrapper[4718]: E1210 14:36:35.853544 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:35 crc kubenswrapper[4718]: E1210 14:36:35.853762 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:35 crc kubenswrapper[4718]: E1210 14:36:35.853963 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: 
connection refused" Dec 10 14:36:35 crc kubenswrapper[4718]: E1210 14:36:35.854179 4718 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:35 crc kubenswrapper[4718]: E1210 14:36:35.854196 4718 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 10 14:36:36 crc kubenswrapper[4718]: I1210 14:36:36.026936 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4718]: I1210 14:36:36.027367 4718 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4718]: I1210 14:36:36.027771 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4718]: I1210 14:36:36.028088 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4718]: I1210 14:36:36.028420 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:36 crc kubenswrapper[4718]: I1210 14:36:36.028678 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:37 crc kubenswrapper[4718]: E1210 14:36:37.441365 4718 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="7s" Dec 10 14:36:40 crc kubenswrapper[4718]: I1210 14:36:40.295950 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:36:40 crc kubenswrapper[4718]: I1210 14:36:40.594561 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:36:40 crc kubenswrapper[4718]: I1210 14:36:40.595118 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 10 14:36:40 crc kubenswrapper[4718]: I1210 14:36:40.595225 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.020313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.022085 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.022656 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.022958 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.023428 4718 
status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.023914 4718 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.024339 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.064752 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.064830 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:41 crc kubenswrapper[4718]: E1210 14:36:41.065890 4718 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.066718 4718 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.745346 4718 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1e6f7750ab175d85aa42f113cc67ef803c0d2d0e46337ce9b623e79c099e1da5" exitCode=0 Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.745478 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1e6f7750ab175d85aa42f113cc67ef803c0d2d0e46337ce9b623e79c099e1da5"} Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.745695 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e263b5d7d9e5f3c73692cab102d975a9cca8fc3050729a421a24bb50ad4244e0"} Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.746017 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.746033 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:41 crc kubenswrapper[4718]: E1210 14:36:41.746700 4718 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.747063 4718 status_manager.go:851] "Failed to get status for pod" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.747794 4718 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.748092 4718 status_manager.go:851] "Failed to get status for pod" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" pod="openshift-authentication/oauth-openshift-558db77b4-tk99n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tk99n\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.748418 4718 status_manager.go:851] "Failed to get status for pod" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" pod="openshift-marketplace/redhat-operators-rwvfp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rwvfp\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.748655 4718 status_manager.go:851] "Failed to get status for pod" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" pod="openshift-marketplace/redhat-marketplace-d8blb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-d8blb\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:41 crc kubenswrapper[4718]: I1210 14:36:41.748997 4718 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 10 14:36:42 crc kubenswrapper[4718]: I1210 14:36:42.757573 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"264278577abc0b3a93faaf0c714de212fba8db086df4e47fb7e0df8bc8317620"} Dec 10 14:36:42 crc kubenswrapper[4718]: I1210 14:36:42.757960 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86716e8516d650564653ad6ab769a1dcfd89e6b3ff043b2d7a00873de704b418"} Dec 10 14:36:43 crc kubenswrapper[4718]: I1210 14:36:43.771907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91a0aaced4c80d38d9ae759923e8ed1e4f23107684b324b30ebf9bbe8a4800da"} Dec 10 14:36:43 crc kubenswrapper[4718]: I1210 14:36:43.771971 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5be9d9628ffa037813ada407c8e05aa3e133095b6513daad124027803c7850d0"} Dec 10 14:36:43 crc kubenswrapper[4718]: I1210 14:36:43.771984 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"040eb9e815df66ac8a0aecd74e084caa44b136b5e1a6049f5688d07f32deef63"} Dec 10 14:36:43 crc kubenswrapper[4718]: I1210 14:36:43.772336 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:43 crc kubenswrapper[4718]: I1210 14:36:43.772357 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:43 crc kubenswrapper[4718]: I1210 14:36:43.772655 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:46 crc kubenswrapper[4718]: I1210 14:36:46.067271 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:46 crc kubenswrapper[4718]: I1210 14:36:46.067713 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:46 crc kubenswrapper[4718]: I1210 14:36:46.073194 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:48 crc kubenswrapper[4718]: I1210 14:36:48.793651 4718 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:48 crc kubenswrapper[4718]: I1210 14:36:48.975854 4718 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="555d9146-4bfc-42f5-81b6-7601d4ca6be3" Dec 10 14:36:49 crc kubenswrapper[4718]: I1210 14:36:49.808468 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:49 crc kubenswrapper[4718]: I1210 14:36:49.808790 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:49 crc kubenswrapper[4718]: I1210 14:36:49.812713 4718 status_manager.go:861] "Pod was deleted and then 
recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="555d9146-4bfc-42f5-81b6-7601d4ca6be3" Dec 10 14:36:49 crc kubenswrapper[4718]: I1210 14:36:49.813011 4718 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://86716e8516d650564653ad6ab769a1dcfd89e6b3ff043b2d7a00873de704b418" Dec 10 14:36:49 crc kubenswrapper[4718]: I1210 14:36:49.813060 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.595221 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.595317 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.816494 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.817347 4718 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340" exitCode=1 Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.817476 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340"} Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.817922 4718 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.817943 4718 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="52b297ad-8ff7-498e-8248-b64014de744f" Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.819574 4718 scope.go:117] "RemoveContainer" containerID="3a6ff07473ddfa3d1db1e5fe47937274f803c400ee435102c3b0383c02af4340" Dec 10 14:36:50 crc kubenswrapper[4718]: I1210 14:36:50.821819 4718 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="555d9146-4bfc-42f5-81b6-7601d4ca6be3" Dec 10 14:36:51 crc kubenswrapper[4718]: I1210 14:36:51.825260 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 10 14:36:51 crc kubenswrapper[4718]: I1210 14:36:51.826250 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6a1b278aa47823b94b314354d8b86e0e9469809cda7f286be0c8196504cd003b"} Dec 10 14:37:00 crc kubenswrapper[4718]: I1210 14:37:00.595768 4718 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 10 14:37:00 crc kubenswrapper[4718]: I1210 14:37:00.596707 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 10 14:37:00 crc kubenswrapper[4718]: I1210 14:37:00.596982 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:37:00 crc kubenswrapper[4718]: I1210 14:37:00.599103 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"bb3a10ee2dad76f3bd16e33ec02b87f1a442bb78877d6a619138b1a4bf9fcd4e"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 10 14:37:00 crc kubenswrapper[4718]: I1210 14:37:00.599535 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://bb3a10ee2dad76f3bd16e33ec02b87f1a442bb78877d6a619138b1a4bf9fcd4e" gracePeriod=30 Dec 10 14:37:03 crc kubenswrapper[4718]: I1210 14:37:03.306123 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 10 14:37:03 crc kubenswrapper[4718]: I1210 14:37:03.641796 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 10 14:37:08 crc kubenswrapper[4718]: I1210 14:37:08.864965 4718 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 10 14:37:10 crc kubenswrapper[4718]: I1210 14:37:10.470984 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:37:10 crc kubenswrapper[4718]: I1210 14:37:10.863104 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:37:11 crc kubenswrapper[4718]: I1210 14:37:11.524319 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 10 14:37:12 crc kubenswrapper[4718]: I1210 14:37:12.285497 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 10 14:37:12 crc kubenswrapper[4718]: I1210 14:37:12.540206 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 10 14:37:12 crc kubenswrapper[4718]: I1210 14:37:12.629737 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 10 14:37:13 crc kubenswrapper[4718]: I1210 14:37:13.186663 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:37:14 crc kubenswrapper[4718]: I1210 14:37:14.265257 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 10 14:37:17 crc kubenswrapper[4718]: I1210 14:37:17.015250 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 10 14:37:17 crc kubenswrapper[4718]: I1210 14:37:17.522036 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 10 14:37:18 crc kubenswrapper[4718]: I1210 14:37:18.668471 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 10 14:37:18 crc kubenswrapper[4718]: I1210 14:37:18.879570 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 10 14:37:19 crc kubenswrapper[4718]: I1210 14:37:19.439786 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 10 14:37:19 crc kubenswrapper[4718]: I1210 14:37:19.691722 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 10 14:37:19 crc kubenswrapper[4718]: I1210 14:37:19.948290 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 10 14:37:20 crc kubenswrapper[4718]: I1210 14:37:20.878103 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 10 14:37:20 crc kubenswrapper[4718]: I1210 14:37:20.943670 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 10 14:37:21 crc kubenswrapper[4718]: I1210 14:37:21.835873 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 10 14:37:22 crc kubenswrapper[4718]: I1210 14:37:22.499646 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 10 14:37:23 crc kubenswrapper[4718]: I1210 14:37:23.135590 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" 
Dec 10 14:37:23 crc kubenswrapper[4718]: I1210 14:37:23.485000 4718 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:24 crc kubenswrapper[4718]: I1210 14:37:24.505182 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 10 14:37:24 crc kubenswrapper[4718]: I1210 14:37:24.540532 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 10 14:37:24 crc kubenswrapper[4718]: I1210 14:37:24.633653 4718 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 10 14:37:24 crc kubenswrapper[4718]: I1210 14:37:24.673351 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 10 14:37:24 crc kubenswrapper[4718]: I1210 14:37:24.987714 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.194489 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.258509 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.418695 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.422753 4718 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.678568 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-oauth-config" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.832616 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.857083 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 10 14:37:25 crc kubenswrapper[4718]: I1210 14:37:25.937351 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.008232 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.219009 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.345753 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.503048 4718 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.507154 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=68.507110437 podStartE2EDuration="1m8.507110437s" podCreationTimestamp="2025-12-10 14:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:36:48.811431523 +0000 UTC m=+313.760654940" watchObservedRunningTime="2025-12-10 14:37:26.507110437 +0000 UTC m=+351.456333854" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.510150 4718 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tk99n","openshift-marketplace/redhat-operators-rwvfp","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-d8blb"] Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.510253 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.520362 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.532819 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=38.532797626 podStartE2EDuration="38.532797626s" podCreationTimestamp="2025-12-10 14:36:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:37:26.532691984 +0000 UTC m=+351.481915401" watchObservedRunningTime="2025-12-10 14:37:26.532797626 +0000 UTC m=+351.482021084" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.628855 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.766333 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 10 14:37:26 crc kubenswrapper[4718]: I1210 14:37:26.997566 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.033543 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.338818 4718 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.415765 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.425158 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.649885 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.772759 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.872886 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 10 14:37:27 crc kubenswrapper[4718]: I1210 14:37:27.909829 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.024032 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.028742 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" path="/var/lib/kubelet/pods/90744db6-7e3a-4f7f-a0ae-4f4dfed7df33/volumes" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.029677 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" path="/var/lib/kubelet/pods/bc324349-4229-4bd4-b5b1-35de752d9f85/volumes" Dec 10 14:37:28 crc 
kubenswrapper[4718]: I1210 14:37:28.030353 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" path="/var/lib/kubelet/pods/f73d3e76-59b1-446b-b599-18b0c115447f/volumes" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.436574 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.526514 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.830063 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 10 14:37:28 crc kubenswrapper[4718]: I1210 14:37:28.927965 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.045731 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.053306 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.346054 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.401458 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.448878 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 10 14:37:29 crc kubenswrapper[4718]: 
I1210 14:37:29.461628 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.463250 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.667046 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 10 14:37:29 crc kubenswrapper[4718]: I1210 14:37:29.927845 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.075896 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.434627 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.479141 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.631099 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.677656 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.696859 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.890008 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" 
Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.932319 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 10 14:37:30 crc kubenswrapper[4718]: I1210 14:37:30.936371 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.073031 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.075475 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.075540 4718 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bb3a10ee2dad76f3bd16e33ec02b87f1a442bb78877d6a619138b1a4bf9fcd4e" exitCode=137 Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.075599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bb3a10ee2dad76f3bd16e33ec02b87f1a442bb78877d6a619138b1a4bf9fcd4e"} Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.075678 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b6c064ae0a51e4d3ce23418d27c4b537a17c5c2f9680507d4dcc19d2c7063ea2"} Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.075712 4718 scope.go:117] "RemoveContainer" containerID="2d1de02a2f41beed3e92a05e69a6c76b44861e7dba46d723a38fffe8c3d0ed42" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 
14:37:31.142738 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.435277 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.819282 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.824898 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.829270 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.874921 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 10 14:37:31 crc kubenswrapper[4718]: I1210 14:37:31.977068 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.045700 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.086714 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.221087 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.444375 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.663533 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.772713 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.791676 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.792757 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.798596 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.891050 4718 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.891440 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://654cde95780bd524bd014d2d2c06218535751be6bc802e41fdb9882f77a20f4d" gracePeriod=5 Dec 10 14:37:32 crc kubenswrapper[4718]: I1210 14:37:32.946449 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.136646 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.142901 
4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.321572 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.387812 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.431489 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.707467 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.780455 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.815599 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.845431 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.912018 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.929752 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 10 14:37:33 crc kubenswrapper[4718]: I1210 14:37:33.957081 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.007160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.047443 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.183438 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.529000 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.534524 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.627715 4718 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.661993 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.696755 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.700899 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.728005 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.752956 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.830311 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.830623 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 10 14:37:34 crc kubenswrapper[4718]: I1210 14:37:34.864451 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.116751 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.228792 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.353443 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.497469 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.622768 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.811230 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.891763 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 10 14:37:35 crc kubenswrapper[4718]: I1210 14:37:35.959736 4718 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 10 14:37:36 crc kubenswrapper[4718]: I1210 14:37:36.322274 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 10 14:37:36 crc kubenswrapper[4718]: I1210 14:37:36.632710 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 10 14:37:36 crc kubenswrapper[4718]: I1210 14:37:36.773522 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 10 14:37:36 crc kubenswrapper[4718]: I1210 14:37:36.811661 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 10 14:37:36 crc kubenswrapper[4718]: I1210 14:37:36.975592 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 10 14:37:36 crc kubenswrapper[4718]: I1210 14:37:36.983578 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.135223 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.206863 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.380046 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.420731 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" 
Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.485657 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.620917 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.672295 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.686226 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.772613 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.918106 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.971108 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 10 14:37:37 crc kubenswrapper[4718]: I1210 14:37:37.990291 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.019843 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.123995 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 14:37:38 crc 
kubenswrapper[4718]: I1210 14:37:38.124082 4718 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="654cde95780bd524bd014d2d2c06218535751be6bc802e41fdb9882f77a20f4d" exitCode=137 Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.144755 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.287618 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.294020 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.479468 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.479588 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634534 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634690 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634690 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634724 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634752 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634772 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634808 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634881 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.634924 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.635171 4718 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.635187 4718 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.635197 4718 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.635207 4718 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.644346 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.683242 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.691811 4718 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.736494 4718 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.768567 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 10 14:37:38 crc kubenswrapper[4718]: I1210 14:37:38.834733 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.135095 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.135188 4718 scope.go:117] "RemoveContainer" containerID="654cde95780bd524bd014d2d2c06218535751be6bc802e41fdb9882f77a20f4d" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.135304 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.213314 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.444838 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.453309 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.466923 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.526227 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.529025 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.609578 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 10 14:37:39 crc kubenswrapper[4718]: I1210 14:37:39.894812 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.027543 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.027821 4718 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.039970 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.040042 4718 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ae3a9358-b95f-48e4-b054-b692933e5942" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.044717 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.044766 4718 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ae3a9358-b95f-48e4-b054-b692933e5942" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.295295 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.389122 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.441731 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.595465 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.609150 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:37:40 
crc kubenswrapper[4718]: I1210 14:37:40.673921 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.800700 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.833324 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 10 14:37:40 crc kubenswrapper[4718]: I1210 14:37:40.863545 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.010031 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.112754 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.154912 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.197356 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.349029 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.554719 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:37:41 crc kubenswrapper[4718]: I1210 14:37:41.650912 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.153357 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.162432 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.165141 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.209922 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.213817 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.277974 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.372567 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.635793 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.700320 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.809082 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.809310 4718 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 10 14:37:42 crc kubenswrapper[4718]: I1210 14:37:42.871624 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.068683 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.083287 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.086316 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.152551 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.222060 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.268315 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.291842 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.801335 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.823149 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 10 14:37:43 crc kubenswrapper[4718]: I1210 14:37:43.951066 4718 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 10 14:37:44 crc kubenswrapper[4718]: I1210 14:37:44.138301 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 10 14:37:44 crc kubenswrapper[4718]: I1210 14:37:44.141840 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 10 14:37:44 crc kubenswrapper[4718]: I1210 14:37:44.394658 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 10 14:37:44 crc kubenswrapper[4718]: I1210 14:37:44.574533 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 10 14:37:44 crc kubenswrapper[4718]: I1210 14:37:44.651569 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 10 14:37:44 crc kubenswrapper[4718]: I1210 14:37:44.894842 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 10 14:37:45 crc kubenswrapper[4718]: I1210 14:37:45.640485 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 10 14:37:45 crc kubenswrapper[4718]: I1210 14:37:45.700263 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:37:45 crc kubenswrapper[4718]: I1210 14:37:45.855998 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 10 14:37:45 crc kubenswrapper[4718]: I1210 14:37:45.860648 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.182894 4718 
generic.go:334] "Generic (PLEG): container finished" podID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerID="e845f171f5180489cbc7cd369993220b5a604c652034411ca26e46365ecb30b8" exitCode=0 Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.182957 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" event={"ID":"451fb12e-a97f-441e-8a8c-d4c217640aef","Type":"ContainerDied","Data":"e845f171f5180489cbc7cd369993220b5a604c652034411ca26e46365ecb30b8"} Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.183611 4718 scope.go:117] "RemoveContainer" containerID="e845f171f5180489cbc7cd369993220b5a604c652034411ca26e46365ecb30b8" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.252243 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.310569 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.326514 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.357272 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401366 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-xjdft"] Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401764 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401788 4718 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401806 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="extract-content" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401819 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="extract-content" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401832 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="registry-server" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401839 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="registry-server" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401852 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401863 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401871 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="extract-utilities" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401878 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="extract-utilities" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401888 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="extract-utilities" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401897 4718 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="extract-utilities" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401907 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" containerName="installer" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401913 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" containerName="installer" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401920 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="extract-content" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401926 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="extract-content" Dec 10 14:37:46 crc kubenswrapper[4718]: E1210 14:37:46.401943 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerName="oauth-openshift" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.401949 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerName="oauth-openshift" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.402052 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73d3e76-59b1-446b-b599-18b0c115447f" containerName="registry-server" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.402069 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.402078 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc324349-4229-4bd4-b5b1-35de752d9f85" containerName="registry-server" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.402087 4718 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="90744db6-7e3a-4f7f-a0ae-4f4dfed7df33" containerName="oauth-openshift" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.402095 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="09fe8e9b-3f80-4979-8203-7aca9407605d" containerName="installer" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.402670 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.407059 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.407260 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.407220 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.407503 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.407818 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.408128 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.408202 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.408317 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 
10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.409068 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.409754 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.409972 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.409980 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.417638 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.418705 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.424125 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448425 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448474 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-blqwz\" (UniqueName: \"kubernetes.io/projected/f8cf7291-3a30-4a68-842f-9bd7caba1a06-kube-api-access-blqwz\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448502 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448526 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448545 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf7291-3a30-4a68-842f-9bd7caba1a06-audit-dir\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448560 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: 
\"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448579 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-audit-policies\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448641 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448670 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448690 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448723 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.448804 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.449763 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.449794 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549739 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-audit-policies\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549789 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549840 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" 
Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549866 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549896 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549916 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.549937 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550017 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550073 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blqwz\" (UniqueName: \"kubernetes.io/projected/f8cf7291-3a30-4a68-842f-9bd7caba1a06-kube-api-access-blqwz\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550153 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550212 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550243 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf7291-3a30-4a68-842f-9bd7caba1a06-audit-dir\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 
14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550267 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550901 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f8cf7291-3a30-4a68-842f-9bd7caba1a06-audit-dir\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.550926 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-audit-policies\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.551613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-service-ca\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.551688 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-cliconfig\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: 
\"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.552121 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.556951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-login\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.557214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.557419 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.557655 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.558042 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-serving-cert\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.563207 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-session\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.563811 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-system-router-certs\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.568894 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f8cf7291-3a30-4a68-842f-9bd7caba1a06-v4-0-config-user-template-error\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: 
\"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.577041 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blqwz\" (UniqueName: \"kubernetes.io/projected/f8cf7291-3a30-4a68-842f-9bd7caba1a06-kube-api-access-blqwz\") pod \"oauth-openshift-77c8c5f65c-xjdft\" (UID: \"f8cf7291-3a30-4a68-842f-9bd7caba1a06\") " pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.708520 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.725127 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:46 crc kubenswrapper[4718]: I1210 14:37:46.839762 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.003822 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.026034 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.106484 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-xjdft"] Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.167866 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.189260 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" event={"ID":"451fb12e-a97f-441e-8a8c-d4c217640aef","Type":"ContainerStarted","Data":"1fffa7e433a4fa6ef1ae1906db45898d8fdc307505d17158a28896b0db709bfc"} Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.190521 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.196521 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.379166 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.415839 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.568328 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-77c8c5f65c-xjdft"] Dec 10 14:37:47 crc kubenswrapper[4718]: I1210 14:37:47.755727 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.000624 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.071169 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.085186 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.085287 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.116408 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.200771 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" event={"ID":"f8cf7291-3a30-4a68-842f-9bd7caba1a06","Type":"ContainerStarted","Data":"080895a0a58090bfbee6de72208183dbdfd4e98c6f0195641c1ac9e4b042f406"} Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.200824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" event={"ID":"f8cf7291-3a30-4a68-842f-9bd7caba1a06","Type":"ContainerStarted","Data":"32078b368658711c01fe353fe7d48c5105701517dce2bf1c12e325534c60db04"} Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.201248 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.226552 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" podStartSLOduration=110.226524003 podStartE2EDuration="1m50.226524003s" podCreationTimestamp="2025-12-10 14:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 14:37:48.225545688 +0000 UTC m=+373.174769115" watchObservedRunningTime="2025-12-10 14:37:48.226524003 +0000 UTC m=+373.175747420" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.514963 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.698075 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 10 14:37:48 crc kubenswrapper[4718]: I1210 14:37:48.772350 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-77c8c5f65c-xjdft" Dec 10 14:37:49 crc kubenswrapper[4718]: I1210 14:37:49.173064 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 10 14:37:49 crc kubenswrapper[4718]: I1210 14:37:49.215176 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 10 14:37:49 crc kubenswrapper[4718]: I1210 14:37:49.922494 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 10 14:37:50 crc kubenswrapper[4718]: I1210 14:37:50.294941 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 10 14:37:50 crc kubenswrapper[4718]: I1210 14:37:50.499472 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 10 14:37:50 crc kubenswrapper[4718]: I1210 14:37:50.887246 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 10 14:37:51 crc kubenswrapper[4718]: I1210 14:37:51.097964 4718 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Dec 10 14:37:51 crc kubenswrapper[4718]: I1210 14:37:51.146873 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 10 14:37:51 crc kubenswrapper[4718]: I1210 14:37:51.253630 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.068262 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.228085 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.262450 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.407817 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.421970 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.573554 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 10 14:37:52 crc kubenswrapper[4718]: I1210 14:37:52.848406 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 10 14:37:53 crc kubenswrapper[4718]: I1210 14:37:53.411770 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:38:18 crc kubenswrapper[4718]: I1210 14:38:18.084952 4718 
patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:38:18 crc kubenswrapper[4718]: I1210 14:38:18.086073 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:38:21 crc kubenswrapper[4718]: I1210 14:38:21.958128 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vws9k"] Dec 10 14:38:21 crc kubenswrapper[4718]: I1210 14:38:21.958664 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" podUID="850edb82-dd95-474f-a41c-3aa48faa4b87" containerName="controller-manager" containerID="cri-o://fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615" gracePeriod=30 Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.083126 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"] Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.083438 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" podUID="b0aac105-3797-4d99-ad0f-443048b96b0a" containerName="route-controller-manager" containerID="cri-o://3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a" gracePeriod=30 Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.418845 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.426681 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.428323 4718 generic.go:334] "Generic (PLEG): container finished" podID="850edb82-dd95-474f-a41c-3aa48faa4b87" containerID="fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615" exitCode=0 Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.428361 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.428435 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" event={"ID":"850edb82-dd95-474f-a41c-3aa48faa4b87","Type":"ContainerDied","Data":"fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615"} Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.428648 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vws9k" event={"ID":"850edb82-dd95-474f-a41c-3aa48faa4b87","Type":"ContainerDied","Data":"ea1157e1a81b647049cbf7d0c5af958cbebe649061d75d5bbde5fdbbb39354a2"} Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.428713 4718 scope.go:117] "RemoveContainer" containerID="fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.432007 4718 generic.go:334] "Generic (PLEG): container finished" podID="b0aac105-3797-4d99-ad0f-443048b96b0a" containerID="3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a" exitCode=0 Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.432061 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" event={"ID":"b0aac105-3797-4d99-ad0f-443048b96b0a","Type":"ContainerDied","Data":"3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a"} Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.432083 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.432117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns" event={"ID":"b0aac105-3797-4d99-ad0f-443048b96b0a","Type":"ContainerDied","Data":"c7f213ddcf78d1aa56392e71cbb1b0e8edaafc8a9a983d6862df6b13ffdbde8c"} Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.451568 4718 scope.go:117] "RemoveContainer" containerID="fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615" Dec 10 14:38:22 crc kubenswrapper[4718]: E1210 14:38:22.452127 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615\": container with ID starting with fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615 not found: ID does not exist" containerID="fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.452200 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615"} err="failed to get container status \"fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615\": rpc error: code = NotFound desc = could not find container \"fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615\": container with ID starting with 
fc3f83f97e473e473d600afa8c79c475b7ebec5f2afb79a3edf440cff907b615 not found: ID does not exist" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.452242 4718 scope.go:117] "RemoveContainer" containerID="3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.471732 4718 scope.go:117] "RemoveContainer" containerID="3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a" Dec 10 14:38:22 crc kubenswrapper[4718]: E1210 14:38:22.472291 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a\": container with ID starting with 3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a not found: ID does not exist" containerID="3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.472336 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a"} err="failed to get container status \"3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a\": rpc error: code = NotFound desc = could not find container \"3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a\": container with ID starting with 3c3729af7be3d4a65679951d5a5d171c24534603c2174e2c37bf9c32c7bf0f5a not found: ID does not exist" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609151 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-client-ca\") pod \"b0aac105-3797-4d99-ad0f-443048b96b0a\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/850edb82-dd95-474f-a41c-3aa48faa4b87-serving-cert\") pod \"850edb82-dd95-474f-a41c-3aa48faa4b87\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609240 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0aac105-3797-4d99-ad0f-443048b96b0a-serving-cert\") pod \"b0aac105-3797-4d99-ad0f-443048b96b0a\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609263 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-config\") pod \"b0aac105-3797-4d99-ad0f-443048b96b0a\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609323 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-config\") pod \"850edb82-dd95-474f-a41c-3aa48faa4b87\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609364 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-client-ca\") pod \"850edb82-dd95-474f-a41c-3aa48faa4b87\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609419 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwk8h\" (UniqueName: \"kubernetes.io/projected/850edb82-dd95-474f-a41c-3aa48faa4b87-kube-api-access-bwk8h\") pod \"850edb82-dd95-474f-a41c-3aa48faa4b87\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 
14:38:22.609445 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gpp\" (UniqueName: \"kubernetes.io/projected/b0aac105-3797-4d99-ad0f-443048b96b0a-kube-api-access-n9gpp\") pod \"b0aac105-3797-4d99-ad0f-443048b96b0a\" (UID: \"b0aac105-3797-4d99-ad0f-443048b96b0a\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.609523 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-proxy-ca-bundles\") pod \"850edb82-dd95-474f-a41c-3aa48faa4b87\" (UID: \"850edb82-dd95-474f-a41c-3aa48faa4b87\") " Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.610598 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "850edb82-dd95-474f-a41c-3aa48faa4b87" (UID: "850edb82-dd95-474f-a41c-3aa48faa4b87"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.610589 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-client-ca" (OuterVolumeSpecName: "client-ca") pod "b0aac105-3797-4d99-ad0f-443048b96b0a" (UID: "b0aac105-3797-4d99-ad0f-443048b96b0a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.610683 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-config" (OuterVolumeSpecName: "config") pod "850edb82-dd95-474f-a41c-3aa48faa4b87" (UID: "850edb82-dd95-474f-a41c-3aa48faa4b87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.610697 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-config" (OuterVolumeSpecName: "config") pod "b0aac105-3797-4d99-ad0f-443048b96b0a" (UID: "b0aac105-3797-4d99-ad0f-443048b96b0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.610677 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-client-ca" (OuterVolumeSpecName: "client-ca") pod "850edb82-dd95-474f-a41c-3aa48faa4b87" (UID: "850edb82-dd95-474f-a41c-3aa48faa4b87"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.617528 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0aac105-3797-4d99-ad0f-443048b96b0a-kube-api-access-n9gpp" (OuterVolumeSpecName: "kube-api-access-n9gpp") pod "b0aac105-3797-4d99-ad0f-443048b96b0a" (UID: "b0aac105-3797-4d99-ad0f-443048b96b0a"). InnerVolumeSpecName "kube-api-access-n9gpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.617610 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850edb82-dd95-474f-a41c-3aa48faa4b87-kube-api-access-bwk8h" (OuterVolumeSpecName: "kube-api-access-bwk8h") pod "850edb82-dd95-474f-a41c-3aa48faa4b87" (UID: "850edb82-dd95-474f-a41c-3aa48faa4b87"). InnerVolumeSpecName "kube-api-access-bwk8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.617668 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850edb82-dd95-474f-a41c-3aa48faa4b87-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "850edb82-dd95-474f-a41c-3aa48faa4b87" (UID: "850edb82-dd95-474f-a41c-3aa48faa4b87"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.618073 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0aac105-3797-4d99-ad0f-443048b96b0a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b0aac105-3797-4d99-ad0f-443048b96b0a" (UID: "b0aac105-3797-4d99-ad0f-443048b96b0a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711252 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711294 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711303 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711318 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwk8h\" (UniqueName: \"kubernetes.io/projected/850edb82-dd95-474f-a41c-3aa48faa4b87-kube-api-access-bwk8h\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc 
kubenswrapper[4718]: I1210 14:38:22.711331 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9gpp\" (UniqueName: \"kubernetes.io/projected/b0aac105-3797-4d99-ad0f-443048b96b0a-kube-api-access-n9gpp\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711340 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/850edb82-dd95-474f-a41c-3aa48faa4b87-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711348 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0aac105-3797-4d99-ad0f-443048b96b0a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711356 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/850edb82-dd95-474f-a41c-3aa48faa4b87-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.711365 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0aac105-3797-4d99-ad0f-443048b96b0a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.761123 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vws9k"] Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.765100 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vws9k"] Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.775168 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"] Dec 10 14:38:22 crc kubenswrapper[4718]: I1210 14:38:22.777891 4718 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wgzns"] Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.424205 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-7z88q"] Dec 10 14:38:23 crc kubenswrapper[4718]: E1210 14:38:23.424710 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0aac105-3797-4d99-ad0f-443048b96b0a" containerName="route-controller-manager" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.424731 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0aac105-3797-4d99-ad0f-443048b96b0a" containerName="route-controller-manager" Dec 10 14:38:23 crc kubenswrapper[4718]: E1210 14:38:23.424740 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850edb82-dd95-474f-a41c-3aa48faa4b87" containerName="controller-manager" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.424747 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="850edb82-dd95-474f-a41c-3aa48faa4b87" containerName="controller-manager" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.424911 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="850edb82-dd95-474f-a41c-3aa48faa4b87" containerName="controller-manager" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.424923 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0aac105-3797-4d99-ad0f-443048b96b0a" containerName="route-controller-manager" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.425591 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.430485 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.430614 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.430673 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.431106 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.431224 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.431346 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.448673 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.451365 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-7z88q"] Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.524505 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-config\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " 
pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.524578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83c01afd-c114-42b0-9146-375d914b36a6-serving-cert\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.524619 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-client-ca\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.524705 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-proxy-ca-bundles\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.524739 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhr62\" (UniqueName: \"kubernetes.io/projected/83c01afd-c114-42b0-9146-375d914b36a6-kube-api-access-bhr62\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.625885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-config\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.625946 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83c01afd-c114-42b0-9146-375d914b36a6-serving-cert\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.625971 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-client-ca\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.626043 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-proxy-ca-bundles\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.626155 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhr62\" (UniqueName: \"kubernetes.io/projected/83c01afd-c114-42b0-9146-375d914b36a6-kube-api-access-bhr62\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 
14:38:23.627996 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-config\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.628002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-client-ca\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.628184 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-proxy-ca-bundles\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.631158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83c01afd-c114-42b0-9146-375d914b36a6-serving-cert\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.650675 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhr62\" (UniqueName: \"kubernetes.io/projected/83c01afd-c114-42b0-9146-375d914b36a6-kube-api-access-bhr62\") pod \"controller-manager-598fcbc587-7z88q\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " 
pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.748469 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:23 crc kubenswrapper[4718]: I1210 14:38:23.950760 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-7z88q"] Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.034669 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850edb82-dd95-474f-a41c-3aa48faa4b87" path="/var/lib/kubelet/pods/850edb82-dd95-474f-a41c-3aa48faa4b87/volumes" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.035598 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0aac105-3797-4d99-ad0f-443048b96b0a" path="/var/lib/kubelet/pods/b0aac105-3797-4d99-ad0f-443048b96b0a/volumes" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.427842 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8"] Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.429206 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.432768 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.434266 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.434419 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.434751 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.434852 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.435142 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.440230 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9cd5\" (UniqueName: \"kubernetes.io/projected/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-kube-api-access-q9cd5\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.440381 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-config\") pod 
\"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.440615 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-serving-cert\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.440825 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-client-ca\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.443553 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8"] Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.454747 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" event={"ID":"83c01afd-c114-42b0-9146-375d914b36a6","Type":"ContainerStarted","Data":"5f73783c038e813e6b2b127527c8cf93271c83ebd19fb9de6c9c478d757c8452"} Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.454794 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" event={"ID":"83c01afd-c114-42b0-9146-375d914b36a6","Type":"ContainerStarted","Data":"539abda571ef0a2392165eee930db10d871561c0230d9d9a181cb22e2c9ca9cf"} Dec 10 14:38:24 crc 
kubenswrapper[4718]: I1210 14:38:24.455481 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.459420 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.542647 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-client-ca\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.542757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9cd5\" (UniqueName: \"kubernetes.io/projected/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-kube-api-access-q9cd5\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.542830 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-config\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.542901 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-serving-cert\") pod \"route-controller-manager-86c947bc4c-wdrp8\" 
(UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.548226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-client-ca\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.550890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-config\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.557041 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-serving-cert\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.579670 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9cd5\" (UniqueName: \"kubernetes.io/projected/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-kube-api-access-q9cd5\") pod \"route-controller-manager-86c947bc4c-wdrp8\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.754125 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.964294 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" podStartSLOduration=2.964268827 podStartE2EDuration="2.964268827s" podCreationTimestamp="2025-12-10 14:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:24.49167802 +0000 UTC m=+409.440901447" watchObservedRunningTime="2025-12-10 14:38:24.964268827 +0000 UTC m=+409.913492244" Dec 10 14:38:24 crc kubenswrapper[4718]: I1210 14:38:24.967303 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8"] Dec 10 14:38:25 crc kubenswrapper[4718]: I1210 14:38:25.464681 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" event={"ID":"1a9006e9-841e-421c-8dc9-dc7d51a6dc46","Type":"ContainerStarted","Data":"982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2"} Dec 10 14:38:25 crc kubenswrapper[4718]: I1210 14:38:25.464743 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" event={"ID":"1a9006e9-841e-421c-8dc9-dc7d51a6dc46","Type":"ContainerStarted","Data":"720f814d29a8efcc5506357adc1f80ee58c34cbc1daf473283e7696a42403f55"} Dec 10 14:38:25 crc kubenswrapper[4718]: I1210 14:38:25.465230 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:25 crc kubenswrapper[4718]: I1210 14:38:25.495584 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" podStartSLOduration=3.495551891 podStartE2EDuration="3.495551891s" podCreationTimestamp="2025-12-10 14:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:25.495477199 +0000 UTC m=+410.444700636" watchObservedRunningTime="2025-12-10 14:38:25.495551891 +0000 UTC m=+410.444775308" Dec 10 14:38:25 crc kubenswrapper[4718]: I1210 14:38:25.692901 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:31 crc kubenswrapper[4718]: I1210 14:38:31.664983 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-7z88q"] Dec 10 14:38:31 crc kubenswrapper[4718]: I1210 14:38:31.665780 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" podUID="83c01afd-c114-42b0-9146-375d914b36a6" containerName="controller-manager" containerID="cri-o://5f73783c038e813e6b2b127527c8cf93271c83ebd19fb9de6c9c478d757c8452" gracePeriod=30 Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.505843 4718 generic.go:334] "Generic (PLEG): container finished" podID="83c01afd-c114-42b0-9146-375d914b36a6" containerID="5f73783c038e813e6b2b127527c8cf93271c83ebd19fb9de6c9c478d757c8452" exitCode=0 Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.505919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" event={"ID":"83c01afd-c114-42b0-9146-375d914b36a6","Type":"ContainerDied","Data":"5f73783c038e813e6b2b127527c8cf93271c83ebd19fb9de6c9c478d757c8452"} Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.760979 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.794158 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b66d45b7f-sdggh"] Dec 10 14:38:32 crc kubenswrapper[4718]: E1210 14:38:32.794579 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c01afd-c114-42b0-9146-375d914b36a6" containerName="controller-manager" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.794597 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c01afd-c114-42b0-9146-375d914b36a6" containerName="controller-manager" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.794736 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c01afd-c114-42b0-9146-375d914b36a6" containerName="controller-manager" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.795353 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.802645 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b66d45b7f-sdggh"] Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.864570 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83c01afd-c114-42b0-9146-375d914b36a6-serving-cert\") pod \"83c01afd-c114-42b0-9146-375d914b36a6\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.864622 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-proxy-ca-bundles\") pod \"83c01afd-c114-42b0-9146-375d914b36a6\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.864707 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-client-ca\") pod \"83c01afd-c114-42b0-9146-375d914b36a6\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.864833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhr62\" (UniqueName: \"kubernetes.io/projected/83c01afd-c114-42b0-9146-375d914b36a6-kube-api-access-bhr62\") pod \"83c01afd-c114-42b0-9146-375d914b36a6\" (UID: \"83c01afd-c114-42b0-9146-375d914b36a6\") " Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.864898 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-config\") pod \"83c01afd-c114-42b0-9146-375d914b36a6\" (UID: 
\"83c01afd-c114-42b0-9146-375d914b36a6\") " Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.865562 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-client-ca" (OuterVolumeSpecName: "client-ca") pod "83c01afd-c114-42b0-9146-375d914b36a6" (UID: "83c01afd-c114-42b0-9146-375d914b36a6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.865918 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-config" (OuterVolumeSpecName: "config") pod "83c01afd-c114-42b0-9146-375d914b36a6" (UID: "83c01afd-c114-42b0-9146-375d914b36a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.865968 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83c01afd-c114-42b0-9146-375d914b36a6" (UID: "83c01afd-c114-42b0-9146-375d914b36a6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.879258 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c01afd-c114-42b0-9146-375d914b36a6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83c01afd-c114-42b0-9146-375d914b36a6" (UID: "83c01afd-c114-42b0-9146-375d914b36a6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.880375 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c01afd-c114-42b0-9146-375d914b36a6-kube-api-access-bhr62" (OuterVolumeSpecName: "kube-api-access-bhr62") pod "83c01afd-c114-42b0-9146-375d914b36a6" (UID: "83c01afd-c114-42b0-9146-375d914b36a6"). InnerVolumeSpecName "kube-api-access-bhr62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.966276 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-proxy-ca-bundles\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.966495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e42586-90ec-4349-a16b-53230a62d5cd-serving-cert\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.966602 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-config\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.966656 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-client-ca\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.966702 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbpk7\" (UniqueName: \"kubernetes.io/projected/94e42586-90ec-4349-a16b-53230a62d5cd-kube-api-access-vbpk7\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.967156 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.967216 4718 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.967234 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83c01afd-c114-42b0-9146-375d914b36a6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.967246 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83c01afd-c114-42b0-9146-375d914b36a6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:32 crc kubenswrapper[4718]: I1210 14:38:32.967266 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhr62\" (UniqueName: 
\"kubernetes.io/projected/83c01afd-c114-42b0-9146-375d914b36a6-kube-api-access-bhr62\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.069109 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-client-ca\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.069184 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbpk7\" (UniqueName: \"kubernetes.io/projected/94e42586-90ec-4349-a16b-53230a62d5cd-kube-api-access-vbpk7\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.069223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-proxy-ca-bundles\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.069259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e42586-90ec-4349-a16b-53230a62d5cd-serving-cert\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.069330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-config\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.071470 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-config\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.071606 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-client-ca\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.072740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94e42586-90ec-4349-a16b-53230a62d5cd-proxy-ca-bundles\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.073428 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94e42586-90ec-4349-a16b-53230a62d5cd-serving-cert\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.099043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-vbpk7\" (UniqueName: \"kubernetes.io/projected/94e42586-90ec-4349-a16b-53230a62d5cd-kube-api-access-vbpk7\") pod \"controller-manager-6b66d45b7f-sdggh\" (UID: \"94e42586-90ec-4349-a16b-53230a62d5cd\") " pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.119737 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.520530 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" event={"ID":"83c01afd-c114-42b0-9146-375d914b36a6","Type":"ContainerDied","Data":"539abda571ef0a2392165eee930db10d871561c0230d9d9a181cb22e2c9ca9cf"} Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.520628 4718 scope.go:117] "RemoveContainer" containerID="5f73783c038e813e6b2b127527c8cf93271c83ebd19fb9de6c9c478d757c8452" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.520899 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-598fcbc587-7z88q" Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.561562 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b66d45b7f-sdggh"] Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.573242 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-7z88q"] Dec 10 14:38:33 crc kubenswrapper[4718]: I1210 14:38:33.581012 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598fcbc587-7z88q"] Dec 10 14:38:34 crc kubenswrapper[4718]: I1210 14:38:34.028827 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c01afd-c114-42b0-9146-375d914b36a6" path="/var/lib/kubelet/pods/83c01afd-c114-42b0-9146-375d914b36a6/volumes" Dec 10 14:38:34 crc kubenswrapper[4718]: I1210 14:38:34.529247 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" event={"ID":"94e42586-90ec-4349-a16b-53230a62d5cd","Type":"ContainerStarted","Data":"b711a804f15d122ebd6b53b53f0c1e4b3b6022bfc049193ae4f92733289bcecd"} Dec 10 14:38:34 crc kubenswrapper[4718]: I1210 14:38:34.529311 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" event={"ID":"94e42586-90ec-4349-a16b-53230a62d5cd","Type":"ContainerStarted","Data":"06b34aacc1ebd006d2e955e5f1b61d2370ecf2fc0069a47fc57e708b48966c93"} Dec 10 14:38:34 crc kubenswrapper[4718]: I1210 14:38:34.529557 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:34 crc kubenswrapper[4718]: I1210 14:38:34.534085 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" Dec 10 14:38:34 crc kubenswrapper[4718]: I1210 14:38:34.548301 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b66d45b7f-sdggh" podStartSLOduration=3.548271727 podStartE2EDuration="3.548271727s" podCreationTimestamp="2025-12-10 14:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:34.545136956 +0000 UTC m=+419.494360373" watchObservedRunningTime="2025-12-10 14:38:34.548271727 +0000 UTC m=+419.497495144" Dec 10 14:38:41 crc kubenswrapper[4718]: I1210 14:38:41.950508 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8"] Dec 10 14:38:41 crc kubenswrapper[4718]: I1210 14:38:41.952461 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" podUID="1a9006e9-841e-421c-8dc9-dc7d51a6dc46" containerName="route-controller-manager" containerID="cri-o://982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2" gracePeriod=30 Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.450770 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.575176 4718 generic.go:334] "Generic (PLEG): container finished" podID="1a9006e9-841e-421c-8dc9-dc7d51a6dc46" containerID="982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2" exitCode=0 Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.575224 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" event={"ID":"1a9006e9-841e-421c-8dc9-dc7d51a6dc46","Type":"ContainerDied","Data":"982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2"} Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.575256 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" event={"ID":"1a9006e9-841e-421c-8dc9-dc7d51a6dc46","Type":"ContainerDied","Data":"720f814d29a8efcc5506357adc1f80ee58c34cbc1daf473283e7696a42403f55"} Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.575273 4718 scope.go:117] "RemoveContainer" containerID="982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.575326 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.597088 4718 scope.go:117] "RemoveContainer" containerID="982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2" Dec 10 14:38:42 crc kubenswrapper[4718]: E1210 14:38:42.598203 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2\": container with ID starting with 982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2 not found: ID does not exist" containerID="982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.598262 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2"} err="failed to get container status \"982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2\": rpc error: code = NotFound desc = could not find container \"982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2\": container with ID starting with 982b4dc460c8466354823fc2fd732c8aebf0f3a88ef6c03b1c25265632e727e2 not found: ID does not exist" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.619789 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-serving-cert\") pod \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.619889 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-client-ca\") pod \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\" (UID: 
\"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.619956 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-config\") pod \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.619990 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9cd5\" (UniqueName: \"kubernetes.io/projected/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-kube-api-access-q9cd5\") pod \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\" (UID: \"1a9006e9-841e-421c-8dc9-dc7d51a6dc46\") " Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.621000 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-client-ca" (OuterVolumeSpecName: "client-ca") pod "1a9006e9-841e-421c-8dc9-dc7d51a6dc46" (UID: "1a9006e9-841e-421c-8dc9-dc7d51a6dc46"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.621008 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-config" (OuterVolumeSpecName: "config") pod "1a9006e9-841e-421c-8dc9-dc7d51a6dc46" (UID: "1a9006e9-841e-421c-8dc9-dc7d51a6dc46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.625516 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-kube-api-access-q9cd5" (OuterVolumeSpecName: "kube-api-access-q9cd5") pod "1a9006e9-841e-421c-8dc9-dc7d51a6dc46" (UID: "1a9006e9-841e-421c-8dc9-dc7d51a6dc46"). 
InnerVolumeSpecName "kube-api-access-q9cd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.628662 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1a9006e9-841e-421c-8dc9-dc7d51a6dc46" (UID: "1a9006e9-841e-421c-8dc9-dc7d51a6dc46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.721453 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.721503 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9cd5\" (UniqueName: \"kubernetes.io/projected/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-kube-api-access-q9cd5\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.721520 4718 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.721532 4718 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1a9006e9-841e-421c-8dc9-dc7d51a6dc46-client-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.903848 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8"] Dec 10 14:38:42 crc kubenswrapper[4718]: I1210 14:38:42.907127 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86c947bc4c-wdrp8"] Dec 10 
14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.437851 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9"] Dec 10 14:38:43 crc kubenswrapper[4718]: E1210 14:38:43.438788 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9006e9-841e-421c-8dc9-dc7d51a6dc46" containerName="route-controller-manager" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.438806 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9006e9-841e-421c-8dc9-dc7d51a6dc46" containerName="route-controller-manager" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.438933 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9006e9-841e-421c-8dc9-dc7d51a6dc46" containerName="route-controller-manager" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.439555 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.445775 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.445953 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.445954 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.446054 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.446354 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 
10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.447764 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.462574 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9"] Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.635328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06af909f-e72a-4ab0-948f-ca1a59d12675-config\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.635728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06af909f-e72a-4ab0-948f-ca1a59d12675-serving-cert\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.635779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8zv\" (UniqueName: \"kubernetes.io/projected/06af909f-e72a-4ab0-948f-ca1a59d12675-kube-api-access-zc8zv\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.635870 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/06af909f-e72a-4ab0-948f-ca1a59d12675-client-ca\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.737933 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06af909f-e72a-4ab0-948f-ca1a59d12675-client-ca\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.738044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06af909f-e72a-4ab0-948f-ca1a59d12675-config\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.738104 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06af909f-e72a-4ab0-948f-ca1a59d12675-serving-cert\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.738135 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8zv\" (UniqueName: \"kubernetes.io/projected/06af909f-e72a-4ab0-948f-ca1a59d12675-kube-api-access-zc8zv\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" 
Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.739584 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/06af909f-e72a-4ab0-948f-ca1a59d12675-client-ca\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.739936 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06af909f-e72a-4ab0-948f-ca1a59d12675-config\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.743265 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06af909f-e72a-4ab0-948f-ca1a59d12675-serving-cert\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:43 crc kubenswrapper[4718]: I1210 14:38:43.755172 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc8zv\" (UniqueName: \"kubernetes.io/projected/06af909f-e72a-4ab0-948f-ca1a59d12675-kube-api-access-zc8zv\") pod \"route-controller-manager-7f6f48d5f5-w75k9\" (UID: \"06af909f-e72a-4ab0-948f-ca1a59d12675\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:44 crc kubenswrapper[4718]: I1210 14:38:44.029149 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9006e9-841e-421c-8dc9-dc7d51a6dc46" path="/var/lib/kubelet/pods/1a9006e9-841e-421c-8dc9-dc7d51a6dc46/volumes" Dec 10 14:38:44 crc kubenswrapper[4718]: 
I1210 14:38:44.054710 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:44 crc kubenswrapper[4718]: I1210 14:38:44.523362 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9"] Dec 10 14:38:44 crc kubenswrapper[4718]: I1210 14:38:44.592453 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" event={"ID":"06af909f-e72a-4ab0-948f-ca1a59d12675","Type":"ContainerStarted","Data":"f4c3dc46a2599adda35b68bd6f4dfb8ce6a0444ffebdabc40bfdb405923ffa9f"} Dec 10 14:38:45 crc kubenswrapper[4718]: I1210 14:38:45.601440 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" event={"ID":"06af909f-e72a-4ab0-948f-ca1a59d12675","Type":"ContainerStarted","Data":"46469f61239a1c1f4fb547ec9aa8251c51da3bf930156d96367e6aacf14b0d7a"} Dec 10 14:38:45 crc kubenswrapper[4718]: I1210 14:38:45.601984 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:45 crc kubenswrapper[4718]: I1210 14:38:45.608517 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" Dec 10 14:38:45 crc kubenswrapper[4718]: I1210 14:38:45.620257 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f6f48d5f5-w75k9" podStartSLOduration=4.620223806 podStartE2EDuration="4.620223806s" podCreationTimestamp="2025-12-10 14:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 
14:38:45.61692965 +0000 UTC m=+430.566153067" watchObservedRunningTime="2025-12-10 14:38:45.620223806 +0000 UTC m=+430.569447223" Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.084277 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.084363 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.084441 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.085227 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7904af25779ae17073393bf7640a10c1799049d2c7e4bb956caaee74cd26ba8"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.085296 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://d7904af25779ae17073393bf7640a10c1799049d2c7e4bb956caaee74cd26ba8" gracePeriod=600 Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.623027 4718 generic.go:334] 
"Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="d7904af25779ae17073393bf7640a10c1799049d2c7e4bb956caaee74cd26ba8" exitCode=0 Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.623629 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"d7904af25779ae17073393bf7640a10c1799049d2c7e4bb956caaee74cd26ba8"} Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.623676 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"550d69ce2e0f8c19ae4a30039fc1e4b39d84c5575da8bb040b8c3a1798409fd1"} Dec 10 14:38:48 crc kubenswrapper[4718]: I1210 14:38:48.623703 4718 scope.go:117] "RemoveContainer" containerID="f420f195ef9f1879dc97af4b3feb8835025c2cbe0ede4113fb1b0e0b08ba6c2a" Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.889422 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4mrgx"] Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.890961 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.913451 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4mrgx"]
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917283 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db4fa310-c60d-4bbb-b77d-3656c669e9d3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917343 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275h7\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-kube-api-access-275h7\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917408 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917446 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db4fa310-c60d-4bbb-b77d-3656c669e9d3-trusted-ca\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db4fa310-c60d-4bbb-b77d-3656c669e9d3-registry-certificates\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db4fa310-c60d-4bbb-b77d-3656c669e9d3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917542 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-registry-tls\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.917658 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-bound-sa-token\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:54 crc kubenswrapper[4718]: I1210 14:38:54.966833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.020962 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-bound-sa-token\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.021085 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db4fa310-c60d-4bbb-b77d-3656c669e9d3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.021122 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275h7\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-kube-api-access-275h7\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.021159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db4fa310-c60d-4bbb-b77d-3656c669e9d3-trusted-ca\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.021209 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db4fa310-c60d-4bbb-b77d-3656c669e9d3-registry-certificates\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.021237 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db4fa310-c60d-4bbb-b77d-3656c669e9d3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.021259 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-registry-tls\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.023850 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db4fa310-c60d-4bbb-b77d-3656c669e9d3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.032783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-registry-tls\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.036116 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db4fa310-c60d-4bbb-b77d-3656c669e9d3-trusted-ca\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.041275 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db4fa310-c60d-4bbb-b77d-3656c669e9d3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.049249 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-bound-sa-token\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.052525 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275h7\" (UniqueName: \"kubernetes.io/projected/db4fa310-c60d-4bbb-b77d-3656c669e9d3-kube-api-access-275h7\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.067188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db4fa310-c60d-4bbb-b77d-3656c669e9d3-registry-certificates\") pod \"image-registry-66df7c8f76-4mrgx\" (UID: \"db4fa310-c60d-4bbb-b77d-3656c669e9d3\") " pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.209424 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:55 crc kubenswrapper[4718]: I1210 14:38:55.673226 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4mrgx"]
Dec 10 14:38:55 crc kubenswrapper[4718]: W1210 14:38:55.683990 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb4fa310_c60d_4bbb_b77d_3656c669e9d3.slice/crio-89c3a28d256479dcd384ccb7cb5021bb4394df2ff0eed72c2ec49e98ba23eb31 WatchSource:0}: Error finding container 89c3a28d256479dcd384ccb7cb5021bb4394df2ff0eed72c2ec49e98ba23eb31: Status 404 returned error can't find the container with id 89c3a28d256479dcd384ccb7cb5021bb4394df2ff0eed72c2ec49e98ba23eb31
Dec 10 14:38:56 crc kubenswrapper[4718]: I1210 14:38:56.676670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx" event={"ID":"db4fa310-c60d-4bbb-b77d-3656c669e9d3","Type":"ContainerStarted","Data":"12963b8f4bfdb4600a2ed80c57549d98ddb0013cf129f8fe35b5f7d1e6bea39b"}
Dec 10 14:38:56 crc kubenswrapper[4718]: I1210 14:38:56.677279 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx" event={"ID":"db4fa310-c60d-4bbb-b77d-3656c669e9d3","Type":"ContainerStarted","Data":"89c3a28d256479dcd384ccb7cb5021bb4394df2ff0eed72c2ec49e98ba23eb31"}
Dec 10 14:38:56 crc kubenswrapper[4718]: I1210 14:38:56.677308 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:38:56 crc kubenswrapper[4718]: I1210 14:38:56.698418 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx" podStartSLOduration=2.698372105 podStartE2EDuration="2.698372105s" podCreationTimestamp="2025-12-10 14:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:38:56.6978066 +0000 UTC m=+441.647030027" watchObservedRunningTime="2025-12-10 14:38:56.698372105 +0000 UTC m=+441.647595522"
Dec 10 14:39:15 crc kubenswrapper[4718]: I1210 14:39:15.216001 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4mrgx"
Dec 10 14:39:15 crc kubenswrapper[4718]: I1210 14:39:15.277111 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n2lcz"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.006496 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4bq2"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.008086 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b4bq2" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="registry-server" containerID="cri-o://d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c" gracePeriod=30
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.081757 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2t5d"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.081844 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-twtlk"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.081859 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bztl"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.082198 4718
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6bztl" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="registry-server" containerID="cri-o://0458b8172a28e2047758a32f9869b39dea094544b91b0c93c34eab8e99b04cef" gracePeriod=30
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.082658 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l2t5d" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="registry-server" containerID="cri-o://cd3457ae829587d2caa906dddebd7a06d9b8bde377bb5e017d6869df51191026" gracePeriod=30
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.083056 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" containerID="cri-o://1fffa7e433a4fa6ef1ae1906db45898d8fdc307505d17158a28896b0db709bfc" gracePeriod=30
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.096047 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8m8j"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.099950 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.110238 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xgvp"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.110890 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6xgvp" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="registry-server" containerID="cri-o://aa306118d66fce8d8475ce09b0bc834630b33d511d6497fac629d3ef6c6fa23f" gracePeriod=30
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.135249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8m8j"]
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.194814 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.194944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.194980 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jncvz\" (UniqueName: \"kubernetes.io/projected/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-kube-api-access-jncvz\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.302358 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jncvz\" (UniqueName: \"kubernetes.io/projected/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-kube-api-access-jncvz\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.302748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.302867 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.306300 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.338264 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.352570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jncvz\" (UniqueName: \"kubernetes.io/projected/fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90-kube-api-access-jncvz\") pod \"marketplace-operator-79b997595-f8m8j\" (UID: \"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90\") " pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.428036 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.503172 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4bq2"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.607789 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-utilities\") pod \"37550300-88f8-40dc-be98-1825518dc65c\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.607988 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-catalog-content\") pod \"37550300-88f8-40dc-be98-1825518dc65c\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.608132 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4zhd\" (UniqueName: \"kubernetes.io/projected/37550300-88f8-40dc-be98-1825518dc65c-kube-api-access-k4zhd\") pod \"37550300-88f8-40dc-be98-1825518dc65c\" (UID: \"37550300-88f8-40dc-be98-1825518dc65c\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.614897 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-utilities" (OuterVolumeSpecName: "utilities") pod "37550300-88f8-40dc-be98-1825518dc65c" (UID: "37550300-88f8-40dc-be98-1825518dc65c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.616110 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37550300-88f8-40dc-be98-1825518dc65c-kube-api-access-k4zhd" (OuterVolumeSpecName: "kube-api-access-k4zhd") pod "37550300-88f8-40dc-be98-1825518dc65c" (UID: "37550300-88f8-40dc-be98-1825518dc65c"). InnerVolumeSpecName "kube-api-access-k4zhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.675401 4718 generic.go:334] "Generic (PLEG): container finished" podID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerID="1fffa7e433a4fa6ef1ae1906db45898d8fdc307505d17158a28896b0db709bfc" exitCode=0
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.675504 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" event={"ID":"451fb12e-a97f-441e-8a8c-d4c217640aef","Type":"ContainerDied","Data":"1fffa7e433a4fa6ef1ae1906db45898d8fdc307505d17158a28896b0db709bfc"}
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.675556 4718 scope.go:117] "RemoveContainer" containerID="e845f171f5180489cbc7cd369993220b5a604c652034411ca26e46365ecb30b8"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.685521 4718 generic.go:334] "Generic (PLEG): container finished" podID="37550300-88f8-40dc-be98-1825518dc65c" containerID="d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c" exitCode=0
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.685665 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4bq2"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.685720 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4bq2" event={"ID":"37550300-88f8-40dc-be98-1825518dc65c","Type":"ContainerDied","Data":"d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c"}
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.685835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4bq2" event={"ID":"37550300-88f8-40dc-be98-1825518dc65c","Type":"ContainerDied","Data":"b988ed9c5a009640a28c4b3c6e96048b12118f8dc7d5b9aa4574f486b53afa86"}
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.702588 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37550300-88f8-40dc-be98-1825518dc65c" (UID: "37550300-88f8-40dc-be98-1825518dc65c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.706112 4718 generic.go:334] "Generic (PLEG): container finished" podID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerID="aa306118d66fce8d8475ce09b0bc834630b33d511d6497fac629d3ef6c6fa23f" exitCode=0
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.706216 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xgvp" event={"ID":"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5","Type":"ContainerDied","Data":"aa306118d66fce8d8475ce09b0bc834630b33d511d6497fac629d3ef6c6fa23f"}
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.711471 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.711506 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4zhd\" (UniqueName: \"kubernetes.io/projected/37550300-88f8-40dc-be98-1825518dc65c-kube-api-access-k4zhd\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.711522 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37550300-88f8-40dc-be98-1825518dc65c-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.717348 4718 generic.go:334] "Generic (PLEG): container finished" podID="a267a67e-effa-40d4-8923-7669798e594d" containerID="cd3457ae829587d2caa906dddebd7a06d9b8bde377bb5e017d6869df51191026" exitCode=0
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.717494 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d"
event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerDied","Data":"cd3457ae829587d2caa906dddebd7a06d9b8bde377bb5e017d6869df51191026"}
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.737784 4718 scope.go:117] "RemoveContainer" containerID="d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.742770 4718 generic.go:334] "Generic (PLEG): container finished" podID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerID="0458b8172a28e2047758a32f9869b39dea094544b91b0c93c34eab8e99b04cef" exitCode=0
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.742867 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bztl" event={"ID":"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440","Type":"ContainerDied","Data":"0458b8172a28e2047758a32f9869b39dea094544b91b0c93c34eab8e99b04cef"}
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.764759 4718 scope.go:117] "RemoveContainer" containerID="e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.786127 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.798072 4718 scope.go:117] "RemoveContainer" containerID="d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.840539 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bztl"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.847073 4718 scope.go:117] "RemoveContainer" containerID="d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c"
Dec 10 14:39:30 crc kubenswrapper[4718]: E1210 14:39:30.847603 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c\": container with ID starting with d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c not found: ID does not exist" containerID="d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.847635 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c"} err="failed to get container status \"d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c\": rpc error: code = NotFound desc = could not find container \"d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c\": container with ID starting with d048cbae971635d05801ceadf3707f59f0ab1cd0869b5e0e4668d14a898b890c not found: ID does not exist"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.847659 4718 scope.go:117] "RemoveContainer" containerID="e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f"
Dec 10 14:39:30 crc kubenswrapper[4718]: E1210 14:39:30.848099 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f\": container with ID starting with e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f not found: ID does not exist" containerID="e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.848160 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f"} err="failed to get container status \"e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f\": rpc error: code = NotFound desc = could not find container \"e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f\": container with ID starting with e3d0252968a2a71bd70788f113565b16c6c4d4d152c98d47e9c6cd996697727f not found: ID does not exist"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.848199 4718 scope.go:117] "RemoveContainer" containerID="d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421"
Dec 10 14:39:30 crc kubenswrapper[4718]: E1210 14:39:30.848620 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421\": container with ID starting with d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421 not found: ID does not exist" containerID="d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.848652 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421"} err="failed to get container status \"d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421\": rpc error: code = NotFound desc = could not find container \"d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421\": container with ID starting with d7a55fd31986a10f214a9adf99798c784b8d68678422ff7c483f2be79bfb0421 not found: ID does not exist"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.850448 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xgvp"
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.917875 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-trusted-ca\") pod \"451fb12e-a97f-441e-8a8c-d4c217640aef\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918022 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-catalog-content\") pod \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918131 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-utilities\") pod \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918177 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-operator-metrics\") pod \"451fb12e-a97f-441e-8a8c-d4c217640aef\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918206 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpzrg\" (UniqueName: \"kubernetes.io/projected/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-kube-api-access-wpzrg\") pod \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918254 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-utilities\") pod \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918308 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-catalog-content\") pod \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\" (UID: \"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918350 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzqrt\" (UniqueName: \"kubernetes.io/projected/451fb12e-a97f-441e-8a8c-d4c217640aef-kube-api-access-jzqrt\") pod \"451fb12e-a97f-441e-8a8c-d4c217640aef\" (UID: \"451fb12e-a97f-441e-8a8c-d4c217640aef\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.918378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5qwj\" (UniqueName: \"kubernetes.io/projected/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-kube-api-access-c5qwj\") pod \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\" (UID: \"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5\") "
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.919664 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-utilities" (OuterVolumeSpecName: "utilities") pod "413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" (UID: "413a70a8-23c0-4e34-a2f7-ec5cb980bfb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.919874 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-utilities" (OuterVolumeSpecName: "utilities") pod "d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" (UID: "d0a8400b-e706-4ec6-b7e8-e4c25bfe9440"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.920850 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "451fb12e-a97f-441e-8a8c-d4c217640aef" (UID: "451fb12e-a97f-441e-8a8c-d4c217640aef"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.926105 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-kube-api-access-c5qwj" (OuterVolumeSpecName: "kube-api-access-c5qwj") pod "413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" (UID: "413a70a8-23c0-4e34-a2f7-ec5cb980bfb5"). InnerVolumeSpecName "kube-api-access-c5qwj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.927248 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "451fb12e-a97f-441e-8a8c-d4c217640aef" (UID: "451fb12e-a97f-441e-8a8c-d4c217640aef"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.928082 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451fb12e-a97f-441e-8a8c-d4c217640aef-kube-api-access-jzqrt" (OuterVolumeSpecName: "kube-api-access-jzqrt") pod "451fb12e-a97f-441e-8a8c-d4c217640aef" (UID: "451fb12e-a97f-441e-8a8c-d4c217640aef"). InnerVolumeSpecName "kube-api-access-jzqrt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.928210 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-kube-api-access-wpzrg" (OuterVolumeSpecName: "kube-api-access-wpzrg") pod "d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" (UID: "d0a8400b-e706-4ec6-b7e8-e4c25bfe9440"). InnerVolumeSpecName "kube-api-access-wpzrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.943245 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" (UID: "d0a8400b-e706-4ec6-b7e8-e4c25bfe9440"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:39:30 crc kubenswrapper[4718]: I1210 14:39:30.993939 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l2t5d"
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021532 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021574 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpzrg\" (UniqueName: \"kubernetes.io/projected/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-kube-api-access-wpzrg\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021588 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021605 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021617 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzqrt\" (UniqueName: \"kubernetes.io/projected/451fb12e-a97f-441e-8a8c-d4c217640aef-kube-api-access-jzqrt\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021629 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5qwj\" (UniqueName: \"kubernetes.io/projected/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-kube-api-access-c5qwj\") on node \"crc\" DevicePath \"\""
Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021641 4718 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/451fb12e-a97f-441e-8a8c-d4c217640aef-marketplace-trusted-ca\") on
node \"crc\" DevicePath \"\"" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.021657 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.025963 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-f8m8j"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.043432 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4bq2"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.055964 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b4bq2"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.081647 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" (UID: "413a70a8-23c0-4e34-a2f7-ec5cb980bfb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.122187 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-utilities\") pod \"a267a67e-effa-40d4-8923-7669798e594d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.122453 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75td8\" (UniqueName: \"kubernetes.io/projected/a267a67e-effa-40d4-8923-7669798e594d-kube-api-access-75td8\") pod \"a267a67e-effa-40d4-8923-7669798e594d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.122511 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-catalog-content\") pod \"a267a67e-effa-40d4-8923-7669798e594d\" (UID: \"a267a67e-effa-40d4-8923-7669798e594d\") " Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.122900 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.123343 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-utilities" (OuterVolumeSpecName: "utilities") pod "a267a67e-effa-40d4-8923-7669798e594d" (UID: "a267a67e-effa-40d4-8923-7669798e594d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.127737 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a267a67e-effa-40d4-8923-7669798e594d-kube-api-access-75td8" (OuterVolumeSpecName: "kube-api-access-75td8") pod "a267a67e-effa-40d4-8923-7669798e594d" (UID: "a267a67e-effa-40d4-8923-7669798e594d"). InnerVolumeSpecName "kube-api-access-75td8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.189095 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a267a67e-effa-40d4-8923-7669798e594d" (UID: "a267a67e-effa-40d4-8923-7669798e594d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.224517 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75td8\" (UniqueName: \"kubernetes.io/projected/a267a67e-effa-40d4-8923-7669798e594d-kube-api-access-75td8\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.224577 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.224589 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a267a67e-effa-40d4-8923-7669798e594d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.755117 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xgvp" 
event={"ID":"413a70a8-23c0-4e34-a2f7-ec5cb980bfb5","Type":"ContainerDied","Data":"b1ccd011ad5cbc576752302792179e8d4eeee09dca89768f2ceef6fae192c5a5"} Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.755255 4718 scope.go:117] "RemoveContainer" containerID="aa306118d66fce8d8475ce09b0bc834630b33d511d6497fac629d3ef6c6fa23f" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.755308 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xgvp" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.758043 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j" event={"ID":"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90","Type":"ContainerStarted","Data":"2e3a8bd4ad1c3a570c7dbc20d0c281ff7f80d0be82ab77b07a396cccd7ce6496"} Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.758115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j" event={"ID":"fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90","Type":"ContainerStarted","Data":"564a9d80be168c61f3590231e54d678cda61a95312ba9cc18c116c095c6013c5"} Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.762376 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l2t5d" event={"ID":"a267a67e-effa-40d4-8923-7669798e594d","Type":"ContainerDied","Data":"d797228584c0ac788adfa4038d054498bf79033359cb3ecceafb24936ed67fae"} Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.762596 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l2t5d" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.771839 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6bztl" event={"ID":"d0a8400b-e706-4ec6-b7e8-e4c25bfe9440","Type":"ContainerDied","Data":"7d39c38ce1be6613f355711c5d12fb2dd06f2916379110362b99e06000f90aae"} Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.771963 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6bztl" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.782804 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" event={"ID":"451fb12e-a97f-441e-8a8c-d4c217640aef","Type":"ContainerDied","Data":"1d04a660357379885f49b3614a2508450398926ae43b100daeef376b23948e39"} Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.783025 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-twtlk" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.787701 4718 scope.go:117] "RemoveContainer" containerID="52bd36c22d3cd72ebbc0f8798fec85da2a3b3436688f120e1d68324eacc73731" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.812411 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xgvp"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.815144 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xgvp"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.835431 4718 scope.go:117] "RemoveContainer" containerID="9887a8dcb7daee930078e09f41ad3b35998b45163dc1aecd013ebfe52605600a" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.851920 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bztl"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.856925 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6bztl"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.874562 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l2t5d"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.882625 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l2t5d"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.887044 4718 scope.go:117] "RemoveContainer" containerID="cd3457ae829587d2caa906dddebd7a06d9b8bde377bb5e017d6869df51191026" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.888845 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-twtlk"] Dec 10 14:39:31 crc kubenswrapper[4718]: E1210 14:39:31.893029 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451fb12e_a97f_441e_8a8c_d4c217640aef.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod451fb12e_a97f_441e_8a8c_d4c217640aef.slice/crio-1d04a660357379885f49b3614a2508450398926ae43b100daeef376b23948e39\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod413a70a8_23c0_4e34_a2f7_ec5cb980bfb5.slice/crio-b1ccd011ad5cbc576752302792179e8d4eeee09dca89768f2ceef6fae192c5a5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda267a67e_effa_40d4_8923_7669798e594d.slice\": RecentStats: unable to find data in memory cache]" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.898838 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-twtlk"] Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.913272 4718 scope.go:117] "RemoveContainer" containerID="abf9badbcad981ca03bdb14664eb045c6026d29a565827fce4add513b5fac188" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.933742 4718 scope.go:117] "RemoveContainer" containerID="d481d1f81e106cf2e907fafebba31e29f613de20322d827fd2143b1c998aa97d" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.955663 4718 scope.go:117] "RemoveContainer" containerID="0458b8172a28e2047758a32f9869b39dea094544b91b0c93c34eab8e99b04cef" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.977028 4718 scope.go:117] "RemoveContainer" containerID="ebaf46d9ca1756e9f79fd538ebdad606c5f4d045b7d180c0089efe608634748f" Dec 10 14:39:31 crc kubenswrapper[4718]: I1210 14:39:31.996512 4718 scope.go:117] "RemoveContainer" containerID="4732a81b71d59cb55fd7132f89a1435c3626287ae2bce394831a6e93b3667c25" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.025700 4718 scope.go:117] "RemoveContainer" 
containerID="1fffa7e433a4fa6ef1ae1906db45898d8fdc307505d17158a28896b0db709bfc" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.028801 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37550300-88f8-40dc-be98-1825518dc65c" path="/var/lib/kubelet/pods/37550300-88f8-40dc-be98-1825518dc65c/volumes" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.030004 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" path="/var/lib/kubelet/pods/413a70a8-23c0-4e34-a2f7-ec5cb980bfb5/volumes" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.030854 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" path="/var/lib/kubelet/pods/451fb12e-a97f-441e-8a8c-d4c217640aef/volumes" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.031584 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a267a67e-effa-40d4-8923-7669798e594d" path="/var/lib/kubelet/pods/a267a67e-effa-40d4-8923-7669798e594d/volumes" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.032771 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" path="/var/lib/kubelet/pods/d0a8400b-e706-4ec6-b7e8-e4c25bfe9440/volumes" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.419447 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zn4lg"] Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.420197 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.420291 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.420437 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.420502 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.420563 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.420632 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.420734 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.420796 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.420865 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.420921 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.420977 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421044 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421102 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421161 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="extract-utilities" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421228 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421282 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421355 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421466 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421536 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421603 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421659 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421714 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="extract-content" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421779 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421838 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.421906 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.421961 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: E1210 14:39:32.422021 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.422076 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.422276 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a8400b-e706-4ec6-b7e8-e4c25bfe9440" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.422366 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.422459 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a267a67e-effa-40d4-8923-7669798e594d" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.422519 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="413a70a8-23c0-4e34-a2f7-ec5cb980bfb5" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 
14:39:32.422575 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="451fb12e-a97f-441e-8a8c-d4c217640aef" containerName="marketplace-operator" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.422639 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="37550300-88f8-40dc-be98-1825518dc65c" containerName="registry-server" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.423606 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.427971 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.441399 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zn4lg"] Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.546056 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmf8\" (UniqueName: \"kubernetes.io/projected/d8eb2321-a379-4525-a11d-2b3f6712aa35-kube-api-access-4jmf8\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.546127 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-catalog-content\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.546164 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-utilities\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.648280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jmf8\" (UniqueName: \"kubernetes.io/projected/d8eb2321-a379-4525-a11d-2b3f6712aa35-kube-api-access-4jmf8\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.648368 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-catalog-content\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.648433 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-utilities\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.649153 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-catalog-content\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.649315 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-utilities\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.672157 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jmf8\" (UniqueName: \"kubernetes.io/projected/d8eb2321-a379-4525-a11d-2b3f6712aa35-kube-api-access-4jmf8\") pod \"certified-operators-zn4lg\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.742600 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.860022 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.865208 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j" Dec 10 14:39:32 crc kubenswrapper[4718]: I1210 14:39:32.886304 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-f8m8j" podStartSLOduration=2.886273198 podStartE2EDuration="2.886273198s" podCreationTimestamp="2025-12-10 14:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:39:32.882706966 +0000 UTC m=+477.831930383" watchObservedRunningTime="2025-12-10 14:39:32.886273198 +0000 UTC m=+477.835496615" Dec 10 14:39:33 crc kubenswrapper[4718]: I1210 14:39:33.241995 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zn4lg"] Dec 10 14:39:33 crc 
kubenswrapper[4718]: W1210 14:39:33.249058 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8eb2321_a379_4525_a11d_2b3f6712aa35.slice/crio-36373d1c4a6a44ebcf07edffda7f23d3fa75aa50bd75bd330123854e7e525b34 WatchSource:0}: Error finding container 36373d1c4a6a44ebcf07edffda7f23d3fa75aa50bd75bd330123854e7e525b34: Status 404 returned error can't find the container with id 36373d1c4a6a44ebcf07edffda7f23d3fa75aa50bd75bd330123854e7e525b34 Dec 10 14:39:33 crc kubenswrapper[4718]: I1210 14:39:33.877487 4718 generic.go:334] "Generic (PLEG): container finished" podID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerID="80c3eb787783e95e9c407e3bc425fc5a6a85da6c857d5f6956eea6141fba9b9e" exitCode=0 Dec 10 14:39:33 crc kubenswrapper[4718]: I1210 14:39:33.877607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerDied","Data":"80c3eb787783e95e9c407e3bc425fc5a6a85da6c857d5f6956eea6141fba9b9e"} Dec 10 14:39:33 crc kubenswrapper[4718]: I1210 14:39:33.877844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerStarted","Data":"36373d1c4a6a44ebcf07edffda7f23d3fa75aa50bd75bd330123854e7e525b34"} Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.221832 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvqjr"] Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.223525 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.227764 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.237227 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvqjr"] Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.283285 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-catalog-content\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.283377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdglx\" (UniqueName: \"kubernetes.io/projected/d800dab1-f8c3-46c8-b3bc-47186e9a999a-kube-api-access-vdglx\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.283452 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-utilities\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.385120 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-catalog-content\") pod \"community-operators-fvqjr\" (UID: 
\"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.385198 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdglx\" (UniqueName: \"kubernetes.io/projected/d800dab1-f8c3-46c8-b3bc-47186e9a999a-kube-api-access-vdglx\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.385227 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-utilities\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.385859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-catalog-content\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.385927 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-utilities\") pod \"community-operators-fvqjr\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.408434 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdglx\" (UniqueName: \"kubernetes.io/projected/d800dab1-f8c3-46c8-b3bc-47186e9a999a-kube-api-access-vdglx\") pod \"community-operators-fvqjr\" (UID: 
\"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") " pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.543530 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.825885 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drwpm"] Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.827901 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.832283 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.839080 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drwpm"] Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.890380 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.893498 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8kr\" (UniqueName: \"kubernetes.io/projected/b3c409a6-6840-483d-8019-68c6842f8d25-kube-api-access-td8kr\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.895112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-utilities\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 
14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.895208 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-catalog-content\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.969009 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvqjr"] Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.997705 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8kr\" (UniqueName: \"kubernetes.io/projected/b3c409a6-6840-483d-8019-68c6842f8d25-kube-api-access-td8kr\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.997790 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-utilities\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.997905 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-catalog-content\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.999476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-utilities\") pod 
\"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:34 crc kubenswrapper[4718]: I1210 14:39:34.999579 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-catalog-content\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.023867 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8kr\" (UniqueName: \"kubernetes.io/projected/b3c409a6-6840-483d-8019-68c6842f8d25-kube-api-access-td8kr\") pod \"redhat-operators-drwpm\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") " pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.165294 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.594150 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drwpm"] Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.892367 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerStarted","Data":"9dcfde76ae9f4e0f8d6ad176bc365c77d0fdd7a964a8d9610fe95ee402c86761"} Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.894365 4718 generic.go:334] "Generic (PLEG): container finished" podID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerID="f6943162e50739565457759982e522e18f092d943e3ebf2af33dc4ad5f83dede" exitCode=0 Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.894423 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvqjr" event={"ID":"d800dab1-f8c3-46c8-b3bc-47186e9a999a","Type":"ContainerDied","Data":"f6943162e50739565457759982e522e18f092d943e3ebf2af33dc4ad5f83dede"} Dec 10 14:39:35 crc kubenswrapper[4718]: I1210 14:39:35.894444 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvqjr" event={"ID":"d800dab1-f8c3-46c8-b3bc-47186e9a999a","Type":"ContainerStarted","Data":"ef7354d3bc78d27bc9151affb853fc58d7b818bbaa35d5842b726e6527085f1a"} Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.638987 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdjz4"] Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.644062 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.648160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.652036 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdjz4"] Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.726888 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e412181-46c3-4cde-81c7-92efeeacc196-utilities\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.726957 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e412181-46c3-4cde-81c7-92efeeacc196-catalog-content\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.726993 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq7d9\" (UniqueName: \"kubernetes.io/projected/4e412181-46c3-4cde-81c7-92efeeacc196-kube-api-access-dq7d9\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.828702 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e412181-46c3-4cde-81c7-92efeeacc196-utilities\") pod \"redhat-marketplace-rdjz4\" (UID: 
\"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.828779 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e412181-46c3-4cde-81c7-92efeeacc196-catalog-content\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.828812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq7d9\" (UniqueName: \"kubernetes.io/projected/4e412181-46c3-4cde-81c7-92efeeacc196-kube-api-access-dq7d9\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.830029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e412181-46c3-4cde-81c7-92efeeacc196-utilities\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.830277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e412181-46c3-4cde-81c7-92efeeacc196-catalog-content\") pod \"redhat-marketplace-rdjz4\" (UID: \"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.850313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq7d9\" (UniqueName: \"kubernetes.io/projected/4e412181-46c3-4cde-81c7-92efeeacc196-kube-api-access-dq7d9\") pod \"redhat-marketplace-rdjz4\" (UID: 
\"4e412181-46c3-4cde-81c7-92efeeacc196\") " pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.902408 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3c409a6-6840-483d-8019-68c6842f8d25" containerID="680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87" exitCode=0 Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.902499 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerDied","Data":"680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87"} Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.906452 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerStarted","Data":"d024c5cbe9462581666877ad5f226c84f5d39bf34a588d577485aec013e52735"} Dec 10 14:39:36 crc kubenswrapper[4718]: I1210 14:39:36.979648 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:37 crc kubenswrapper[4718]: I1210 14:39:37.406857 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdjz4"] Dec 10 14:39:37 crc kubenswrapper[4718]: W1210 14:39:37.424550 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e412181_46c3_4cde_81c7_92efeeacc196.slice/crio-af99bdd5a85bc8c27ea703287cf4d875d42d045d26df55ebfa70d2e457b6b86d WatchSource:0}: Error finding container af99bdd5a85bc8c27ea703287cf4d875d42d045d26df55ebfa70d2e457b6b86d: Status 404 returned error can't find the container with id af99bdd5a85bc8c27ea703287cf4d875d42d045d26df55ebfa70d2e457b6b86d Dec 10 14:39:37 crc kubenswrapper[4718]: I1210 14:39:37.914345 4718 generic.go:334] "Generic (PLEG): container finished" podID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerID="d024c5cbe9462581666877ad5f226c84f5d39bf34a588d577485aec013e52735" exitCode=0 Dec 10 14:39:37 crc kubenswrapper[4718]: I1210 14:39:37.914429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerDied","Data":"d024c5cbe9462581666877ad5f226c84f5d39bf34a588d577485aec013e52735"} Dec 10 14:39:37 crc kubenswrapper[4718]: I1210 14:39:37.917675 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdjz4" event={"ID":"4e412181-46c3-4cde-81c7-92efeeacc196","Type":"ContainerStarted","Data":"af99bdd5a85bc8c27ea703287cf4d875d42d045d26df55ebfa70d2e457b6b86d"} Dec 10 14:39:38 crc kubenswrapper[4718]: I1210 14:39:38.929949 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" 
event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerStarted","Data":"dc5a564dc52aa672c70b5d9fee134aef6d3157e97aea63ed6fc136ce88356718"} Dec 10 14:39:38 crc kubenswrapper[4718]: I1210 14:39:38.966237 4718 generic.go:334] "Generic (PLEG): container finished" podID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerID="1779b1e46ffd800cca7152253c87c8c125bf6836c7e0a91a7c06c725f3c76031" exitCode=0 Dec 10 14:39:38 crc kubenswrapper[4718]: I1210 14:39:38.966365 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvqjr" event={"ID":"d800dab1-f8c3-46c8-b3bc-47186e9a999a","Type":"ContainerDied","Data":"1779b1e46ffd800cca7152253c87c8c125bf6836c7e0a91a7c06c725f3c76031"} Dec 10 14:39:38 crc kubenswrapper[4718]: I1210 14:39:38.970654 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e412181-46c3-4cde-81c7-92efeeacc196" containerID="965812ec1f3c86e3a29068a99140426e4373901c4f786ffb09d7081a92fd60f2" exitCode=0 Dec 10 14:39:38 crc kubenswrapper[4718]: I1210 14:39:38.970729 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdjz4" event={"ID":"4e412181-46c3-4cde-81c7-92efeeacc196","Type":"ContainerDied","Data":"965812ec1f3c86e3a29068a99140426e4373901c4f786ffb09d7081a92fd60f2"} Dec 10 14:39:38 crc kubenswrapper[4718]: I1210 14:39:38.979216 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zn4lg" podStartSLOduration=3.274347167 podStartE2EDuration="6.979183269s" podCreationTimestamp="2025-12-10 14:39:32 +0000 UTC" firstStartedPulling="2025-12-10 14:39:34.889725021 +0000 UTC m=+479.838948438" lastFinishedPulling="2025-12-10 14:39:38.594561123 +0000 UTC m=+483.543784540" observedRunningTime="2025-12-10 14:39:38.969781576 +0000 UTC m=+483.919004993" watchObservedRunningTime="2025-12-10 14:39:38.979183269 +0000 UTC m=+483.928406696" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.317721 
4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" podUID="2638a0da-6209-4691-a4d4-6aa91a4ca547" containerName="registry" containerID="cri-o://39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832" gracePeriod=30 Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.755326 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794356 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2638a0da-6209-4691-a4d4-6aa91a4ca547-ca-trust-extracted\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794440 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-tls\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794531 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-certificates\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794611 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-bound-sa-token\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 
14:39:40.794638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2638a0da-6209-4691-a4d4-6aa91a4ca547-installation-pull-secrets\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794769 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794801 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-trusted-ca\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.794829 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qkpt\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-kube-api-access-4qkpt\") pod \"2638a0da-6209-4691-a4d4-6aa91a4ca547\" (UID: \"2638a0da-6209-4691-a4d4-6aa91a4ca547\") " Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.799849 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.800235 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.810242 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2638a0da-6209-4691-a4d4-6aa91a4ca547-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.823371 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.825582 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.825811 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2638a0da-6209-4691-a4d4-6aa91a4ca547-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.827588 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-kube-api-access-4qkpt" (OuterVolumeSpecName: "kube-api-access-4qkpt") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "kube-api-access-4qkpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.837012 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2638a0da-6209-4691-a4d4-6aa91a4ca547" (UID: "2638a0da-6209-4691-a4d4-6aa91a4ca547"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897169 4718 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897211 4718 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897225 4718 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2638a0da-6209-4691-a4d4-6aa91a4ca547-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897235 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2638a0da-6209-4691-a4d4-6aa91a4ca547-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897247 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qkpt\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-kube-api-access-4qkpt\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897257 4718 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2638a0da-6209-4691-a4d4-6aa91a4ca547-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.897266 4718 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2638a0da-6209-4691-a4d4-6aa91a4ca547-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 10 14:39:40 crc 
kubenswrapper[4718]: I1210 14:39:40.988849 4718 generic.go:334] "Generic (PLEG): container finished" podID="2638a0da-6209-4691-a4d4-6aa91a4ca547" containerID="39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832" exitCode=0 Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.988979 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.989035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" event={"ID":"2638a0da-6209-4691-a4d4-6aa91a4ca547","Type":"ContainerDied","Data":"39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832"} Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.989115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n2lcz" event={"ID":"2638a0da-6209-4691-a4d4-6aa91a4ca547","Type":"ContainerDied","Data":"89d8172ee68c1ce0612b0f9d92543a6555331767060a5644ea28b6db05f03443"} Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.989149 4718 scope.go:117] "RemoveContainer" containerID="39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832" Dec 10 14:39:40 crc kubenswrapper[4718]: I1210 14:39:40.997163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvqjr" event={"ID":"d800dab1-f8c3-46c8-b3bc-47186e9a999a","Type":"ContainerStarted","Data":"80675f0d42fc72cc6cad30b230b2d8e6f7088aa257a67b30dfaf82df8131f6e0"} Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.000797 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e412181-46c3-4cde-81c7-92efeeacc196" containerID="fb6420df1960282aba3dd21c3df6c3d6edc32e072cbdd176fba2428fcec25d1c" exitCode=0 Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.000844 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-rdjz4" event={"ID":"4e412181-46c3-4cde-81c7-92efeeacc196","Type":"ContainerDied","Data":"fb6420df1960282aba3dd21c3df6c3d6edc32e072cbdd176fba2428fcec25d1c"} Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.018540 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvqjr" podStartSLOduration=3.354122351 podStartE2EDuration="7.018513579s" podCreationTimestamp="2025-12-10 14:39:34 +0000 UTC" firstStartedPulling="2025-12-10 14:39:35.896432356 +0000 UTC m=+480.845655773" lastFinishedPulling="2025-12-10 14:39:39.560823594 +0000 UTC m=+484.510047001" observedRunningTime="2025-12-10 14:39:41.018426377 +0000 UTC m=+485.967649804" watchObservedRunningTime="2025-12-10 14:39:41.018513579 +0000 UTC m=+485.967737006" Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.036103 4718 scope.go:117] "RemoveContainer" containerID="39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832" Dec 10 14:39:41 crc kubenswrapper[4718]: E1210 14:39:41.039405 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832\": container with ID starting with 39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832 not found: ID does not exist" containerID="39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832" Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.039460 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832"} err="failed to get container status \"39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832\": rpc error: code = NotFound desc = could not find container \"39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832\": container with ID starting with 
39b7f994dda0d5ee0f4dadc168939bc66049f6d1013e620d0a2afbe47638a832 not found: ID does not exist" Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.084613 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n2lcz"] Dec 10 14:39:41 crc kubenswrapper[4718]: I1210 14:39:41.094509 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n2lcz"] Dec 10 14:39:42 crc kubenswrapper[4718]: I1210 14:39:42.030550 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2638a0da-6209-4691-a4d4-6aa91a4ca547" path="/var/lib/kubelet/pods/2638a0da-6209-4691-a4d4-6aa91a4ca547/volumes" Dec 10 14:39:42 crc kubenswrapper[4718]: I1210 14:39:42.743410 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:42 crc kubenswrapper[4718]: I1210 14:39:42.744956 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:42 crc kubenswrapper[4718]: I1210 14:39:42.797456 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:43 crc kubenswrapper[4718]: I1210 14:39:43.067894 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 14:39:44 crc kubenswrapper[4718]: I1210 14:39:44.544345 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:44 crc kubenswrapper[4718]: I1210 14:39:44.544716 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:44 crc kubenswrapper[4718]: I1210 14:39:44.595540 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:45 crc kubenswrapper[4718]: I1210 14:39:45.086146 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:39:47 crc kubenswrapper[4718]: I1210 14:39:47.064253 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdjz4" event={"ID":"4e412181-46c3-4cde-81c7-92efeeacc196","Type":"ContainerStarted","Data":"dea1c744c4756b3c15aa6a9913fd65f74466d54c403347886adc2039d645eebc"} Dec 10 14:39:47 crc kubenswrapper[4718]: I1210 14:39:47.087937 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdjz4" podStartSLOduration=3.236149587 podStartE2EDuration="11.087908312s" podCreationTimestamp="2025-12-10 14:39:36 +0000 UTC" firstStartedPulling="2025-12-10 14:39:38.978889581 +0000 UTC m=+483.928112998" lastFinishedPulling="2025-12-10 14:39:46.830648306 +0000 UTC m=+491.779871723" observedRunningTime="2025-12-10 14:39:47.083152079 +0000 UTC m=+492.032375496" watchObservedRunningTime="2025-12-10 14:39:47.087908312 +0000 UTC m=+492.037131729" Dec 10 14:39:48 crc kubenswrapper[4718]: I1210 14:39:48.081730 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerStarted","Data":"10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146"} Dec 10 14:39:49 crc kubenswrapper[4718]: I1210 14:39:49.091428 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3c409a6-6840-483d-8019-68c6842f8d25" containerID="10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146" exitCode=0 Dec 10 14:39:49 crc kubenswrapper[4718]: I1210 14:39:49.091557 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" 
event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerDied","Data":"10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146"} Dec 10 14:39:51 crc kubenswrapper[4718]: I1210 14:39:51.108619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerStarted","Data":"90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1"} Dec 10 14:39:51 crc kubenswrapper[4718]: I1210 14:39:51.141065 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-drwpm" podStartSLOduration=4.082538387 podStartE2EDuration="17.141038651s" podCreationTimestamp="2025-12-10 14:39:34 +0000 UTC" firstStartedPulling="2025-12-10 14:39:36.904592869 +0000 UTC m=+481.853816286" lastFinishedPulling="2025-12-10 14:39:49.963093133 +0000 UTC m=+494.912316550" observedRunningTime="2025-12-10 14:39:51.140571239 +0000 UTC m=+496.089794666" watchObservedRunningTime="2025-12-10 14:39:51.141038651 +0000 UTC m=+496.090262068" Dec 10 14:39:55 crc kubenswrapper[4718]: I1210 14:39:55.165501 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:55 crc kubenswrapper[4718]: I1210 14:39:55.166062 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:55 crc kubenswrapper[4718]: I1210 14:39:55.207100 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:56 crc kubenswrapper[4718]: I1210 14:39:56.177000 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drwpm" Dec 10 14:39:56 crc kubenswrapper[4718]: I1210 14:39:56.980520 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:56 crc kubenswrapper[4718]: I1210 14:39:56.980625 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:57 crc kubenswrapper[4718]: I1210 14:39:57.024299 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:39:57 crc kubenswrapper[4718]: I1210 14:39:57.184752 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdjz4" Dec 10 14:40:48 crc kubenswrapper[4718]: I1210 14:40:48.084512 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:40:48 crc kubenswrapper[4718]: I1210 14:40:48.084911 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:41:18 crc kubenswrapper[4718]: I1210 14:41:18.084934 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:41:18 crc kubenswrapper[4718]: I1210 14:41:18.085510 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.084009 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.084587 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.084635 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.085326 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"550d69ce2e0f8c19ae4a30039fc1e4b39d84c5575da8bb040b8c3a1798409fd1"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.085401 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://550d69ce2e0f8c19ae4a30039fc1e4b39d84c5575da8bb040b8c3a1798409fd1" gracePeriod=600 Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.885784 4718 generic.go:334] "Generic 
(PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="550d69ce2e0f8c19ae4a30039fc1e4b39d84c5575da8bb040b8c3a1798409fd1" exitCode=0 Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.885855 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"550d69ce2e0f8c19ae4a30039fc1e4b39d84c5575da8bb040b8c3a1798409fd1"} Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.886184 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"469eaa563104223b399681102a51314f824a32389f5976a9684ee0f42162fac8"} Dec 10 14:41:48 crc kubenswrapper[4718]: I1210 14:41:48.886214 4718 scope.go:117] "RemoveContainer" containerID="d7904af25779ae17073393bf7640a10c1799049d2c7e4bb956caaee74cd26ba8" Dec 10 14:43:48 crc kubenswrapper[4718]: I1210 14:43:48.084468 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:43:48 crc kubenswrapper[4718]: I1210 14:43:48.085091 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:44:18 crc kubenswrapper[4718]: I1210 14:44:18.084322 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:44:18 crc kubenswrapper[4718]: I1210 14:44:18.085020 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:44:20 crc kubenswrapper[4718]: I1210 14:44:20.930875 4718 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 10 14:44:48 crc kubenswrapper[4718]: I1210 14:44:48.084758 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:44:48 crc kubenswrapper[4718]: I1210 14:44:48.085455 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:44:48 crc kubenswrapper[4718]: I1210 14:44:48.085537 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:44:48 crc kubenswrapper[4718]: I1210 14:44:48.086357 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"469eaa563104223b399681102a51314f824a32389f5976a9684ee0f42162fac8"} 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:44:48 crc kubenswrapper[4718]: I1210 14:44:48.086466 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://469eaa563104223b399681102a51314f824a32389f5976a9684ee0f42162fac8" gracePeriod=600 Dec 10 14:44:49 crc kubenswrapper[4718]: I1210 14:44:49.031835 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="469eaa563104223b399681102a51314f824a32389f5976a9684ee0f42162fac8" exitCode=0 Dec 10 14:44:49 crc kubenswrapper[4718]: I1210 14:44:49.031923 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"469eaa563104223b399681102a51314f824a32389f5976a9684ee0f42162fac8"} Dec 10 14:44:49 crc kubenswrapper[4718]: I1210 14:44:49.032510 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"f5b3b0aee57acdc91ec51d93333e41cff8abf9d2f833c5f20d07f2e1f4175aed"} Dec 10 14:44:49 crc kubenswrapper[4718]: I1210 14:44:49.032544 4718 scope.go:117] "RemoveContainer" containerID="550d69ce2e0f8c19ae4a30039fc1e4b39d84c5575da8bb040b8c3a1798409fd1" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.180332 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq"] Dec 10 14:45:00 crc kubenswrapper[4718]: E1210 14:45:00.181159 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2638a0da-6209-4691-a4d4-6aa91a4ca547" containerName="registry" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.181173 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2638a0da-6209-4691-a4d4-6aa91a4ca547" containerName="registry" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.181301 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2638a0da-6209-4691-a4d4-6aa91a4ca547" containerName="registry" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.181866 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.185271 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq"] Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.185868 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.187881 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.197648 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-config-volume\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.197697 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-secret-volume\") pod 
\"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.197782 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsnq5\" (UniqueName: \"kubernetes.io/projected/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-kube-api-access-xsnq5\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.298786 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsnq5\" (UniqueName: \"kubernetes.io/projected/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-kube-api-access-xsnq5\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.298913 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-config-volume\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.298939 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-secret-volume\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.299999 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-config-volume\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.305991 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-secret-volume\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.319569 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsnq5\" (UniqueName: \"kubernetes.io/projected/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-kube-api-access-xsnq5\") pod \"collect-profiles-29422965-6d2gq\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.501566 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:00 crc kubenswrapper[4718]: I1210 14:45:00.719403 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq"] Dec 10 14:45:01 crc kubenswrapper[4718]: I1210 14:45:01.103062 4718 generic.go:334] "Generic (PLEG): container finished" podID="6fd1608c-f123-43ee-ab77-cb3efa8a81cc" containerID="2e54e9d50ad3dfcbdb453c22ecc606c47d8b5db9653ae77c0735306592f4ca52" exitCode=0 Dec 10 14:45:01 crc kubenswrapper[4718]: I1210 14:45:01.103118 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" event={"ID":"6fd1608c-f123-43ee-ab77-cb3efa8a81cc","Type":"ContainerDied","Data":"2e54e9d50ad3dfcbdb453c22ecc606c47d8b5db9653ae77c0735306592f4ca52"} Dec 10 14:45:01 crc kubenswrapper[4718]: I1210 14:45:01.103151 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" event={"ID":"6fd1608c-f123-43ee-ab77-cb3efa8a81cc","Type":"ContainerStarted","Data":"d1d138e7721516b59b982f4a4efab7cb2a4d53cf5aaa64949d6bbc974b1e73f5"} Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.364416 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.526599 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-secret-volume\") pod \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.526695 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-config-volume\") pod \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.526795 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsnq5\" (UniqueName: \"kubernetes.io/projected/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-kube-api-access-xsnq5\") pod \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\" (UID: \"6fd1608c-f123-43ee-ab77-cb3efa8a81cc\") " Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.528150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fd1608c-f123-43ee-ab77-cb3efa8a81cc" (UID: "6fd1608c-f123-43ee-ab77-cb3efa8a81cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.532445 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fd1608c-f123-43ee-ab77-cb3efa8a81cc" (UID: "6fd1608c-f123-43ee-ab77-cb3efa8a81cc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.533193 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-kube-api-access-xsnq5" (OuterVolumeSpecName: "kube-api-access-xsnq5") pod "6fd1608c-f123-43ee-ab77-cb3efa8a81cc" (UID: "6fd1608c-f123-43ee-ab77-cb3efa8a81cc"). InnerVolumeSpecName "kube-api-access-xsnq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.629035 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsnq5\" (UniqueName: \"kubernetes.io/projected/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-kube-api-access-xsnq5\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.629087 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:02 crc kubenswrapper[4718]: I1210 14:45:02.629098 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fd1608c-f123-43ee-ab77-cb3efa8a81cc-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:03 crc kubenswrapper[4718]: I1210 14:45:03.118685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" event={"ID":"6fd1608c-f123-43ee-ab77-cb3efa8a81cc","Type":"ContainerDied","Data":"d1d138e7721516b59b982f4a4efab7cb2a4d53cf5aaa64949d6bbc974b1e73f5"} Dec 10 14:45:03 crc kubenswrapper[4718]: I1210 14:45:03.119175 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1d138e7721516b59b982f4a4efab7cb2a4d53cf5aaa64949d6bbc974b1e73f5" Dec 10 14:45:03 crc kubenswrapper[4718]: I1210 14:45:03.118733 4718 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.282294 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-kjxkw"] Dec 10 14:45:33 crc kubenswrapper[4718]: E1210 14:45:33.283005 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd1608c-f123-43ee-ab77-cb3efa8a81cc" containerName="collect-profiles" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.283019 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd1608c-f123-43ee-ab77-cb3efa8a81cc" containerName="collect-profiles" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.283137 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd1608c-f123-43ee-ab77-cb3efa8a81cc" containerName="collect-profiles" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.283569 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.285678 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.285825 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.285857 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cfkzr" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.297550 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-kjxkw"] Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.305714 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-nmkz4"] Dec 10 14:45:33 crc kubenswrapper[4718]: 
I1210 14:45:33.306567 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-nmkz4" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.308870 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-z5bsj" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.326495 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-t9tzp"] Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.327546 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.329244 4718 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ndhsp" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.331133 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-nmkz4"] Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.344121 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-t9tzp"] Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.447581 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clfq\" (UniqueName: \"kubernetes.io/projected/33afc510-28a4-4f71-810d-da9f04ca2a86-kube-api-access-6clfq\") pod \"cert-manager-cainjector-7f985d654d-kjxkw\" (UID: \"33afc510-28a4-4f71-810d-da9f04ca2a86\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.447689 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xvt5\" (UniqueName: \"kubernetes.io/projected/07467c5a-e532-4233-8736-8191dbbdd234-kube-api-access-6xvt5\") pod 
\"cert-manager-webhook-5655c58dd6-t9tzp\" (UID: \"07467c5a-e532-4233-8736-8191dbbdd234\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.447767 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fdmv\" (UniqueName: \"kubernetes.io/projected/8989c984-6f88-4b26-9c39-37cd583802d7-kube-api-access-4fdmv\") pod \"cert-manager-5b446d88c5-nmkz4\" (UID: \"8989c984-6f88-4b26-9c39-37cd583802d7\") " pod="cert-manager/cert-manager-5b446d88c5-nmkz4" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.548625 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clfq\" (UniqueName: \"kubernetes.io/projected/33afc510-28a4-4f71-810d-da9f04ca2a86-kube-api-access-6clfq\") pod \"cert-manager-cainjector-7f985d654d-kjxkw\" (UID: \"33afc510-28a4-4f71-810d-da9f04ca2a86\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.548706 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xvt5\" (UniqueName: \"kubernetes.io/projected/07467c5a-e532-4233-8736-8191dbbdd234-kube-api-access-6xvt5\") pod \"cert-manager-webhook-5655c58dd6-t9tzp\" (UID: \"07467c5a-e532-4233-8736-8191dbbdd234\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.548766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fdmv\" (UniqueName: \"kubernetes.io/projected/8989c984-6f88-4b26-9c39-37cd583802d7-kube-api-access-4fdmv\") pod \"cert-manager-5b446d88c5-nmkz4\" (UID: \"8989c984-6f88-4b26-9c39-37cd583802d7\") " pod="cert-manager/cert-manager-5b446d88c5-nmkz4" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.570953 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clfq\" 
(UniqueName: \"kubernetes.io/projected/33afc510-28a4-4f71-810d-da9f04ca2a86-kube-api-access-6clfq\") pod \"cert-manager-cainjector-7f985d654d-kjxkw\" (UID: \"33afc510-28a4-4f71-810d-da9f04ca2a86\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.572146 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xvt5\" (UniqueName: \"kubernetes.io/projected/07467c5a-e532-4233-8736-8191dbbdd234-kube-api-access-6xvt5\") pod \"cert-manager-webhook-5655c58dd6-t9tzp\" (UID: \"07467c5a-e532-4233-8736-8191dbbdd234\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.574038 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fdmv\" (UniqueName: \"kubernetes.io/projected/8989c984-6f88-4b26-9c39-37cd583802d7-kube-api-access-4fdmv\") pod \"cert-manager-5b446d88c5-nmkz4\" (UID: \"8989c984-6f88-4b26-9c39-37cd583802d7\") " pod="cert-manager/cert-manager-5b446d88c5-nmkz4" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.600318 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.625039 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-nmkz4" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.640247 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.840001 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-kjxkw"] Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.858133 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:45:33 crc kubenswrapper[4718]: I1210 14:45:33.873839 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-nmkz4"] Dec 10 14:45:33 crc kubenswrapper[4718]: W1210 14:45:33.877863 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8989c984_6f88_4b26_9c39_37cd583802d7.slice/crio-2dbe6691a3ec276850dbbde5a99331685353f3e1b2737e38b9abd9aa99ac8c99 WatchSource:0}: Error finding container 2dbe6691a3ec276850dbbde5a99331685353f3e1b2737e38b9abd9aa99ac8c99: Status 404 returned error can't find the container with id 2dbe6691a3ec276850dbbde5a99331685353f3e1b2737e38b9abd9aa99ac8c99 Dec 10 14:45:34 crc kubenswrapper[4718]: I1210 14:45:34.122577 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-t9tzp"] Dec 10 14:45:34 crc kubenswrapper[4718]: W1210 14:45:34.129108 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07467c5a_e532_4233_8736_8191dbbdd234.slice/crio-322bab95ad31a9c9c498c6d2a0bbb50d02068b8e7c52d7df9645c62d778f6fcd WatchSource:0}: Error finding container 322bab95ad31a9c9c498c6d2a0bbb50d02068b8e7c52d7df9645c62d778f6fcd: Status 404 returned error can't find the container with id 322bab95ad31a9c9c498c6d2a0bbb50d02068b8e7c52d7df9645c62d778f6fcd Dec 10 14:45:34 crc kubenswrapper[4718]: I1210 14:45:34.309788 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-5b446d88c5-nmkz4" event={"ID":"8989c984-6f88-4b26-9c39-37cd583802d7","Type":"ContainerStarted","Data":"2dbe6691a3ec276850dbbde5a99331685353f3e1b2737e38b9abd9aa99ac8c99"} Dec 10 14:45:34 crc kubenswrapper[4718]: I1210 14:45:34.312218 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" event={"ID":"07467c5a-e532-4233-8736-8191dbbdd234","Type":"ContainerStarted","Data":"322bab95ad31a9c9c498c6d2a0bbb50d02068b8e7c52d7df9645c62d778f6fcd"} Dec 10 14:45:34 crc kubenswrapper[4718]: I1210 14:45:34.313459 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" event={"ID":"33afc510-28a4-4f71-810d-da9f04ca2a86","Type":"ContainerStarted","Data":"e946380b888c5fb27d391bd773a053f461416ed52fd1b261e348ac305bac6e13"} Dec 10 14:45:40 crc kubenswrapper[4718]: I1210 14:45:40.409858 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-nmkz4" event={"ID":"8989c984-6f88-4b26-9c39-37cd583802d7","Type":"ContainerStarted","Data":"379224c1c64a91f23cb243ae69cbf2371fc40d651ffd6249f6f1bb329d197d5c"} Dec 10 14:45:40 crc kubenswrapper[4718]: I1210 14:45:40.412681 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" event={"ID":"07467c5a-e532-4233-8736-8191dbbdd234","Type":"ContainerStarted","Data":"53e4dc9618843a0262a2abfe6973fcf44928522da945a2159dbf846548020efb"} Dec 10 14:45:40 crc kubenswrapper[4718]: I1210 14:45:40.413097 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:40 crc kubenswrapper[4718]: I1210 14:45:40.415137 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" 
event={"ID":"33afc510-28a4-4f71-810d-da9f04ca2a86","Type":"ContainerStarted","Data":"275decbd312e35e990b6f50a0563dfbe528a4344b8e1a1dbffbdd56f43ac03b9"} Dec 10 14:45:40 crc kubenswrapper[4718]: I1210 14:45:40.428353 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-nmkz4" podStartSLOduration=1.992436234 podStartE2EDuration="7.428326848s" podCreationTimestamp="2025-12-10 14:45:33 +0000 UTC" firstStartedPulling="2025-12-10 14:45:33.881593375 +0000 UTC m=+838.830816782" lastFinishedPulling="2025-12-10 14:45:39.317483979 +0000 UTC m=+844.266707396" observedRunningTime="2025-12-10 14:45:40.423500914 +0000 UTC m=+845.372724331" watchObservedRunningTime="2025-12-10 14:45:40.428326848 +0000 UTC m=+845.377550265" Dec 10 14:45:40 crc kubenswrapper[4718]: I1210 14:45:40.449726 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" podStartSLOduration=2.1882052180000002 podStartE2EDuration="7.449687147s" podCreationTimestamp="2025-12-10 14:45:33 +0000 UTC" firstStartedPulling="2025-12-10 14:45:34.131217633 +0000 UTC m=+839.080441050" lastFinishedPulling="2025-12-10 14:45:39.392699542 +0000 UTC m=+844.341922979" observedRunningTime="2025-12-10 14:45:40.443228541 +0000 UTC m=+845.392451988" watchObservedRunningTime="2025-12-10 14:45:40.449687147 +0000 UTC m=+845.398910584" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.653104 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-kjxkw" podStartSLOduration=5.178383526 podStartE2EDuration="10.653085007s" podCreationTimestamp="2025-12-10 14:45:33 +0000 UTC" firstStartedPulling="2025-12-10 14:45:33.857307381 +0000 UTC m=+838.806530808" lastFinishedPulling="2025-12-10 14:45:39.332008872 +0000 UTC m=+844.281232289" observedRunningTime="2025-12-10 14:45:40.472133924 +0000 UTC m=+845.421357351" 
watchObservedRunningTime="2025-12-10 14:45:43.653085007 +0000 UTC m=+848.602308424" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.655277 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtch2"] Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.655759 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-controller" containerID="cri-o://a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.655876 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="nbdb" containerID="cri-o://4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.655909 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.655956 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-node" containerID="cri-o://f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.656037 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="northd" 
containerID="cri-o://abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.656074 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="sbdb" containerID="cri-o://f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.656090 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-acl-logging" containerID="cri-o://d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.702911 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" containerID="cri-o://32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" gracePeriod=30 Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.937305 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/3.log" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.939184 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovn-acl-logging/0.log" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.939723 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovn-controller/0.log" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.940137 4718 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992310 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xtj6h"] Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992630 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="northd" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992656 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="northd" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992672 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992680 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992691 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992700 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992710 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="sbdb" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992718 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="sbdb" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992730 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" 
containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992736 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992747 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kubecfg-setup" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992756 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kubecfg-setup" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992771 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992782 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992796 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-acl-logging" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992804 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-acl-logging" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992816 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-node" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992823 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-node" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992835 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="nbdb" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992842 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="nbdb" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.992853 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992860 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992977 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-ovn-metrics" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.992997 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="kube-rbac-proxy-node" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993007 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993016 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="nbdb" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993025 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993034 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="northd" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993044 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993052 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-acl-logging" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993066 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="sbdb" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993076 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovn-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.993200 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993211 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: E1210 14:45:43.993221 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993228 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993435 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.993451 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerName="ovnkube-controller" Dec 10 14:45:43 crc kubenswrapper[4718]: I1210 14:45:43.996096 4718 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087151 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-systemd-units\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087229 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-log-socket\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087261 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-slash\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087280 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-var-lib-openvswitch\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087307 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-node-log\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087350 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-netns\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087375 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-bin\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087358 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087418 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087491 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087537 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-log-socket" (OuterVolumeSpecName: "log-socket") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087554 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-slash" (OuterVolumeSpecName: "host-slash") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087572 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087591 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-node-log" (OuterVolumeSpecName: "node-log") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087609 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087613 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-ovn\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087625 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087655 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-kubelet\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087691 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-netd\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087729 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087745 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhb7n\" (UniqueName: \"kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087757 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087772 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-ovn-kubernetes\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087800 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovn-node-metrics-cert\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087768 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087853 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087829 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-etc-openvswitch\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087884 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087913 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-systemd\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.087994 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088036 
4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-script-lib\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088062 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-openvswitch\") pod \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\" (UID: \"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c\") " Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088314 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovnkube-script-lib\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-node-log\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088540 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovnkube-config\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088455 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088638 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088708 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-cni-bin\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088705 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088769 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-systemd\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088798 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088913 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-kubelet\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.088967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-var-lib-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089076 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-etc-openvswitch\") pod 
\"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089145 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-cni-netd\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-log-socket\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089198 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089237 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-slash\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089321 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-env-overrides\") pod 
\"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089353 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089378 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovn-node-metrics-cert\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089448 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089494 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxg8x\" (UniqueName: \"kubernetes.io/projected/597d67b9-ced7-43bc-8273-bdc80a8a96bc-kube-api-access-nxg8x\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089535 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-systemd-units\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089567 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-run-netns\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-ovn\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089773 4718 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089799 4718 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089813 4718 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089826 4718 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089839 4718 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089850 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089862 4718 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089873 4718 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089885 4718 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089897 4718 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089909 4718 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-log-socket\") on node \"crc\" DevicePath \"\"" 
Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089922 4718 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089934 4718 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-slash\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089948 4718 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089962 4718 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-node-log\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089974 4718 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.089985 4718 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.093330 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: 
"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.093334 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n" (OuterVolumeSpecName: "kube-api-access-jhb7n") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "kube-api-access-jhb7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.101785 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" (UID: "612af6cb-db4d-4874-a9ea-8b3c7eb8e30c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191628 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-env-overrides\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191692 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191711 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovn-node-metrics-cert\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191734 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191750 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxg8x\" (UniqueName: \"kubernetes.io/projected/597d67b9-ced7-43bc-8273-bdc80a8a96bc-kube-api-access-nxg8x\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191768 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-systemd-units\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-run-netns\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-ovn\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191848 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovnkube-script-lib\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191894 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-node-log\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovnkube-config\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-cni-bin\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.191995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-systemd\") pod 
\"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192038 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-kubelet\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192069 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-var-lib-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192074 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-cni-bin\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192072 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-systemd-units\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192165 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-systemd\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192165 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-etc-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192195 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-node-log\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192096 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-etc-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-kubelet\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc 
kubenswrapper[4718]: I1210 14:45:44.192246 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192270 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-var-lib-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192318 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-run-netns\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-ovn\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-cni-netd\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192542 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-log-socket\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192572 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192603 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-slash\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192622 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-cni-netd\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-log-socket\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192702 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-env-overrides\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-host-slash\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192780 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/597d67b9-ced7-43bc-8273-bdc80a8a96bc-run-openvswitch\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.192919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovnkube-config\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.193007 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhb7n\" (UniqueName: \"kubernetes.io/projected/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-kube-api-access-jhb7n\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.193024 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.193037 4718 reconciler_common.go:293] "Volume 
detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.193305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovnkube-script-lib\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.195846 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/597d67b9-ced7-43bc-8273-bdc80a8a96bc-ovn-node-metrics-cert\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.210859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxg8x\" (UniqueName: \"kubernetes.io/projected/597d67b9-ced7-43bc-8273-bdc80a8a96bc-kube-api-access-nxg8x\") pod \"ovnkube-node-xtj6h\" (UID: \"597d67b9-ced7-43bc-8273-bdc80a8a96bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.312372 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:44 crc kubenswrapper[4718]: W1210 14:45:44.336094 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod597d67b9_ced7_43bc_8273_bdc80a8a96bc.slice/crio-037e1e408cd9281b7cdf267a417a7f054851c74cd3f88872dc55828d21f77808 WatchSource:0}: Error finding container 037e1e408cd9281b7cdf267a417a7f054851c74cd3f88872dc55828d21f77808: Status 404 returned error can't find the container with id 037e1e408cd9281b7cdf267a417a7f054851c74cd3f88872dc55828d21f77808 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.442238 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovnkube-controller/3.log" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.445744 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovn-acl-logging/0.log" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446200 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dtch2_612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/ovn-controller/0.log" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446735 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446778 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446786 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" 
containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446792 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446799 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446805 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" exitCode=0 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446812 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" exitCode=143 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446821 4718 generic.go:334] "Generic (PLEG): container finished" podID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" exitCode=143 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446850 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446894 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446927 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446941 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446978 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.446995 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447006 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} Dec 10 14:45:44 crc 
kubenswrapper[4718]: I1210 14:45:44.447050 4718 scope.go:117] "RemoveContainer" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447026 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447210 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447722 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447743 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447751 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447778 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447786 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447792 4718 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447797 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447873 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447882 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447888 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447947 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447953 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} Dec 10 14:45:44 crc 
kubenswrapper[4718]: I1210 14:45:44.447959 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447965 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447970 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447976 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447982 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.447989 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448001 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448046 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448055 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448060 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448066 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448071 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448077 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448082 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448088 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448093 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448101 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dtch2" event={"ID":"612af6cb-db4d-4874-a9ea-8b3c7eb8e30c","Type":"ContainerDied","Data":"41da8781c324f2c75342ff2d91854483b45d32b610071b068acb73c5cb47afa9"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448110 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448118 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448124 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448130 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448138 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448144 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448150 4718 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448156 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448161 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.448167 4718 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.449445 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/2.log" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.450545 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/1.log" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.450628 4718 generic.go:334] "Generic (PLEG): container finished" podID="9db3984f-4589-462f-94d7-89a885be63d5" containerID="b0c86f72e14e1a163070f2925d7a72fc1412fc781bcff0326c25e2db755af5ba" exitCode=2 Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.450709 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerDied","Data":"b0c86f72e14e1a163070f2925d7a72fc1412fc781bcff0326c25e2db755af5ba"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.450759 4718 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.451318 4718 scope.go:117] "RemoveContainer" containerID="b0c86f72e14e1a163070f2925d7a72fc1412fc781bcff0326c25e2db755af5ba" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.451975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"037e1e408cd9281b7cdf267a417a7f054851c74cd3f88872dc55828d21f77808"} Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.470197 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.496122 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtch2"] Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.498345 4718 scope.go:117] "RemoveContainer" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.499380 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dtch2"] Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.526027 4718 scope.go:117] "RemoveContainer" containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.615133 4718 scope.go:117] "RemoveContainer" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.641883 4718 scope.go:117] "RemoveContainer" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.659696 4718 scope.go:117] 
"RemoveContainer" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.676905 4718 scope.go:117] "RemoveContainer" containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.694540 4718 scope.go:117] "RemoveContainer" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.713557 4718 scope.go:117] "RemoveContainer" containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.728255 4718 scope.go:117] "RemoveContainer" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.728857 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": container with ID starting with 32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750 not found: ID does not exist" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.728904 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} err="failed to get container status \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": rpc error: code = NotFound desc = could not find container \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": container with ID starting with 32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.728939 4718 scope.go:117] "RemoveContainer" 
containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.729483 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": container with ID starting with f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70 not found: ID does not exist" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.729504 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} err="failed to get container status \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": rpc error: code = NotFound desc = could not find container \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": container with ID starting with f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.729521 4718 scope.go:117] "RemoveContainer" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.729811 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": container with ID starting with f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b not found: ID does not exist" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.729836 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} err="failed to get container status \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": rpc error: code = NotFound desc = could not find container \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": container with ID starting with f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.729852 4718 scope.go:117] "RemoveContainer" containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.730281 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": container with ID starting with 4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139 not found: ID does not exist" containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.730312 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} err="failed to get container status \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": rpc error: code = NotFound desc = could not find container \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": container with ID starting with 4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.730338 4718 scope.go:117] "RemoveContainer" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.730632 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": container with ID starting with abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a not found: ID does not exist" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.730661 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} err="failed to get container status \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": rpc error: code = NotFound desc = could not find container \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": container with ID starting with abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.730675 4718 scope.go:117] "RemoveContainer" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.731005 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": container with ID starting with 02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9 not found: ID does not exist" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.731032 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} err="failed to get container status \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": rpc error: code = NotFound desc = could not find container 
\"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": container with ID starting with 02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.731047 4718 scope.go:117] "RemoveContainer" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.732026 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": container with ID starting with f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8 not found: ID does not exist" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.732046 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} err="failed to get container status \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": rpc error: code = NotFound desc = could not find container \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": container with ID starting with f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.732061 4718 scope.go:117] "RemoveContainer" containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.732351 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": container with ID starting with d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01 not found: ID does not exist" 
containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.732381 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} err="failed to get container status \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": rpc error: code = NotFound desc = could not find container \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": container with ID starting with d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.732415 4718 scope.go:117] "RemoveContainer" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.732702 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": container with ID starting with a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f not found: ID does not exist" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.732732 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} err="failed to get container status \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": rpc error: code = NotFound desc = could not find container \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": container with ID starting with a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.732751 4718 scope.go:117] 
"RemoveContainer" containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" Dec 10 14:45:44 crc kubenswrapper[4718]: E1210 14:45:44.733236 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": container with ID starting with df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504 not found: ID does not exist" containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.733262 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} err="failed to get container status \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": rpc error: code = NotFound desc = could not find container \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": container with ID starting with df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.733284 4718 scope.go:117] "RemoveContainer" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.733801 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} err="failed to get container status \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": rpc error: code = NotFound desc = could not find container \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": container with ID starting with 32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.733882 4718 
scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.734643 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} err="failed to get container status \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": rpc error: code = NotFound desc = could not find container \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": container with ID starting with f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.734673 4718 scope.go:117] "RemoveContainer" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.735037 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} err="failed to get container status \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": rpc error: code = NotFound desc = could not find container \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": container with ID starting with f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.735064 4718 scope.go:117] "RemoveContainer" containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.735564 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} err="failed to get container status \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": rpc 
error: code = NotFound desc = could not find container \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": container with ID starting with 4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.735602 4718 scope.go:117] "RemoveContainer" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.735922 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} err="failed to get container status \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": rpc error: code = NotFound desc = could not find container \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": container with ID starting with abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.735947 4718 scope.go:117] "RemoveContainer" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.736222 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} err="failed to get container status \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": rpc error: code = NotFound desc = could not find container \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": container with ID starting with 02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.736250 4718 scope.go:117] "RemoveContainer" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" Dec 10 14:45:44 crc 
kubenswrapper[4718]: I1210 14:45:44.736758 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} err="failed to get container status \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": rpc error: code = NotFound desc = could not find container \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": container with ID starting with f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.736781 4718 scope.go:117] "RemoveContainer" containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.737059 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} err="failed to get container status \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": rpc error: code = NotFound desc = could not find container \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": container with ID starting with d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.737089 4718 scope.go:117] "RemoveContainer" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.737497 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} err="failed to get container status \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": rpc error: code = NotFound desc = could not find container \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": container 
with ID starting with a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.737522 4718 scope.go:117] "RemoveContainer" containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.737935 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} err="failed to get container status \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": rpc error: code = NotFound desc = could not find container \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": container with ID starting with df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.737965 4718 scope.go:117] "RemoveContainer" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.738251 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} err="failed to get container status \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": rpc error: code = NotFound desc = could not find container \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": container with ID starting with 32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.738277 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.738605 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} err="failed to get container status \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": rpc error: code = NotFound desc = could not find container \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": container with ID starting with f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.738639 4718 scope.go:117] "RemoveContainer" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.738937 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} err="failed to get container status \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": rpc error: code = NotFound desc = could not find container \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": container with ID starting with f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.738999 4718 scope.go:117] "RemoveContainer" containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.739577 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} err="failed to get container status \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": rpc error: code = NotFound desc = could not find container \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": container with ID starting with 4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139 not found: ID does not 
exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.739611 4718 scope.go:117] "RemoveContainer" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.740097 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} err="failed to get container status \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": rpc error: code = NotFound desc = could not find container \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": container with ID starting with abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.740129 4718 scope.go:117] "RemoveContainer" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.740761 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} err="failed to get container status \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": rpc error: code = NotFound desc = could not find container \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": container with ID starting with 02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.740844 4718 scope.go:117] "RemoveContainer" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.741218 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} err="failed to get container status 
\"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": rpc error: code = NotFound desc = could not find container \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": container with ID starting with f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.741241 4718 scope.go:117] "RemoveContainer" containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.741603 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} err="failed to get container status \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": rpc error: code = NotFound desc = could not find container \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": container with ID starting with d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.741630 4718 scope.go:117] "RemoveContainer" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.741949 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} err="failed to get container status \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": rpc error: code = NotFound desc = could not find container \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": container with ID starting with a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.741973 4718 scope.go:117] "RemoveContainer" 
containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.742255 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} err="failed to get container status \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": rpc error: code = NotFound desc = could not find container \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": container with ID starting with df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.742286 4718 scope.go:117] "RemoveContainer" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.742580 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} err="failed to get container status \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": rpc error: code = NotFound desc = could not find container \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": container with ID starting with 32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.742606 4718 scope.go:117] "RemoveContainer" containerID="f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.742887 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70"} err="failed to get container status \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": rpc error: code = NotFound desc = could 
not find container \"f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70\": container with ID starting with f2ec586c9c00d7877087cb32e37d064b16317ea1c414ab474147f4c480026c70 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.742916 4718 scope.go:117] "RemoveContainer" containerID="f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.743271 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b"} err="failed to get container status \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": rpc error: code = NotFound desc = could not find container \"f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b\": container with ID starting with f27c89c7dd6f5334f05c340d706e938abde36968f00c30bf84f7b2fb30dfcb3b not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.743293 4718 scope.go:117] "RemoveContainer" containerID="4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.743767 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139"} err="failed to get container status \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": rpc error: code = NotFound desc = could not find container \"4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139\": container with ID starting with 4e231c6918c86142643af798db9c4c6de9764822d2b60e1155c727b143191139 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.743848 4718 scope.go:117] "RemoveContainer" containerID="abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 
14:45:44.744222 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a"} err="failed to get container status \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": rpc error: code = NotFound desc = could not find container \"abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a\": container with ID starting with abe2f058bd1c2102c5f00a29af4869eb6dcd8b732ae90f5c4fda6759b460622a not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.744250 4718 scope.go:117] "RemoveContainer" containerID="02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.744604 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9"} err="failed to get container status \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": rpc error: code = NotFound desc = could not find container \"02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9\": container with ID starting with 02ee125e78be56eace63f1077a3952d4a9b9ccb387af5413a9325eb4bff80ea9 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.744638 4718 scope.go:117] "RemoveContainer" containerID="f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.744922 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8"} err="failed to get container status \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": rpc error: code = NotFound desc = could not find container \"f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8\": container with ID starting with 
f49783c85c42bd1d4afd9d2546c84af07c770b069b7e6f22cf3d95c0e173bcb8 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.744946 4718 scope.go:117] "RemoveContainer" containerID="d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.745181 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01"} err="failed to get container status \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": rpc error: code = NotFound desc = could not find container \"d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01\": container with ID starting with d2ac19fc0eeba332014eefac978a35798eb8bf1ff79212bca24690236bd46e01 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.745208 4718 scope.go:117] "RemoveContainer" containerID="a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.745516 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f"} err="failed to get container status \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": rpc error: code = NotFound desc = could not find container \"a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f\": container with ID starting with a9263fdd7b12f5852db038570b5a179fff5850893af790fa27694f00375a607f not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.745551 4718 scope.go:117] "RemoveContainer" containerID="df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.745789 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504"} err="failed to get container status \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": rpc error: code = NotFound desc = could not find container \"df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504\": container with ID starting with df5db949bd1c53e8ae754b3e7e500e8c2e196aacd68a36d6331ddca378ef7504 not found: ID does not exist" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.745811 4718 scope.go:117] "RemoveContainer" containerID="32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750" Dec 10 14:45:44 crc kubenswrapper[4718]: I1210 14:45:44.746030 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750"} err="failed to get container status \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": rpc error: code = NotFound desc = could not find container \"32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750\": container with ID starting with 32146f91ffb2907607dc1436062d7e3fe2a52f19e1e18831c50a5b3e94055750 not found: ID does not exist" Dec 10 14:45:45 crc kubenswrapper[4718]: I1210 14:45:45.461688 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/2.log" Dec 10 14:45:45 crc kubenswrapper[4718]: I1210 14:45:45.462601 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/1.log" Dec 10 14:45:45 crc kubenswrapper[4718]: I1210 14:45:45.462773 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hv62w" event={"ID":"9db3984f-4589-462f-94d7-89a885be63d5","Type":"ContainerStarted","Data":"76c01e8cd719574908d590df1519e850af104e71f349dbfc5e6f27e5657e7041"} Dec 10 14:45:45 crc 
kubenswrapper[4718]: I1210 14:45:45.464777 4718 generic.go:334] "Generic (PLEG): container finished" podID="597d67b9-ced7-43bc-8273-bdc80a8a96bc" containerID="6b2e6598e388b55b02c5bc74b13b2fdc46c582adcc3667837e62cea8ddf5dcad" exitCode=0 Dec 10 14:45:45 crc kubenswrapper[4718]: I1210 14:45:45.464814 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerDied","Data":"6b2e6598e388b55b02c5bc74b13b2fdc46c582adcc3667837e62cea8ddf5dcad"} Dec 10 14:45:46 crc kubenswrapper[4718]: I1210 14:45:46.030074 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612af6cb-db4d-4874-a9ea-8b3c7eb8e30c" path="/var/lib/kubelet/pods/612af6cb-db4d-4874-a9ea-8b3c7eb8e30c/volumes" Dec 10 14:45:46 crc kubenswrapper[4718]: I1210 14:45:46.474535 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"2781ff6bbdf171e45ffc9dfe8e5c3ad6c40e59a5c9a9414d2036a38a08bb9778"} Dec 10 14:45:46 crc kubenswrapper[4718]: I1210 14:45:46.474598 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"e2a649de8bd14903f0a56bf62d09aceb2d5663fd3443a17b724e29931cebbf54"} Dec 10 14:45:47 crc kubenswrapper[4718]: I1210 14:45:47.485120 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"6454c25b189f2375a3e872502defdb0524f8af0848e221f36e2bd9d3ff962834"} Dec 10 14:45:47 crc kubenswrapper[4718]: I1210 14:45:47.485519 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" 
event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"02b088dc0049a7b5e66adacf1969c1c7c6573c176475130aa4b1ab5200658a2f"} Dec 10 14:45:47 crc kubenswrapper[4718]: I1210 14:45:47.485534 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"cdd03ecd58fc1bb19a5ed7121c136b289e9c0f23e8a1dbca31950dfe8e9e640c"} Dec 10 14:45:47 crc kubenswrapper[4718]: I1210 14:45:47.485545 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"3262518cc0f4aeff1a0632fc6e127718b0ba0d84764e2fc1a34ade1045f05575"} Dec 10 14:45:48 crc kubenswrapper[4718]: I1210 14:45:48.644514 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-t9tzp" Dec 10 14:45:49 crc kubenswrapper[4718]: I1210 14:45:49.502441 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"14bb843464d5cdee7b1acbb8f7e9c24db31760b755d2298eead69c78f62cf070"} Dec 10 14:45:54 crc kubenswrapper[4718]: I1210 14:45:54.534610 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" event={"ID":"597d67b9-ced7-43bc-8273-bdc80a8a96bc","Type":"ContainerStarted","Data":"62a1ed0c11bfde255649dca55dc7457269107fe8c22cd23bf6b92a3a203d667a"} Dec 10 14:45:54 crc kubenswrapper[4718]: I1210 14:45:54.535428 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:54 crc kubenswrapper[4718]: I1210 14:45:54.567314 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" 
podStartSLOduration=11.567293455 podStartE2EDuration="11.567293455s" podCreationTimestamp="2025-12-10 14:45:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:45:54.566649258 +0000 UTC m=+859.515872675" watchObservedRunningTime="2025-12-10 14:45:54.567293455 +0000 UTC m=+859.516516872" Dec 10 14:45:54 crc kubenswrapper[4718]: I1210 14:45:54.596375 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:55 crc kubenswrapper[4718]: I1210 14:45:55.542913 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:55 crc kubenswrapper[4718]: I1210 14:45:55.543450 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:45:55 crc kubenswrapper[4718]: I1210 14:45:55.572458 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:46:14 crc kubenswrapper[4718]: I1210 14:46:14.343902 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xtj6h" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.314977 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787"] Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.316548 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.318997 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.325651 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787"] Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.371932 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.372035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvzf\" (UniqueName: \"kubernetes.io/projected/792e6faa-53e5-4bb6-baa5-32ce33828b19-kube-api-access-wnvzf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.372200 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: 
I1210 14:46:23.473502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvzf\" (UniqueName: \"kubernetes.io/projected/792e6faa-53e5-4bb6-baa5-32ce33828b19-kube-api-access-wnvzf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.473659 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.473707 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.474247 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.474323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.497434 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvzf\" (UniqueName: \"kubernetes.io/projected/792e6faa-53e5-4bb6-baa5-32ce33828b19-kube-api-access-wnvzf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.637114 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:23 crc kubenswrapper[4718]: I1210 14:46:23.907778 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787"] Dec 10 14:46:24 crc kubenswrapper[4718]: I1210 14:46:24.719494 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerStarted","Data":"020e5be7057b55e84f291fdeade44977b4a464ff81d8f413827faa50312cb45e"} Dec 10 14:46:24 crc kubenswrapper[4718]: I1210 14:46:24.719567 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerStarted","Data":"c154cc3b28be866d487aedc133411813d4517209c77604f1b38d65ffe2ec016a"} Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.661255 4718 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-stmxx"] Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.662972 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.671770 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stmxx"] Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.705676 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-utilities\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.705819 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-catalog-content\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.705886 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppbd\" (UniqueName: \"kubernetes.io/projected/dc5826e4-6a04-4886-ba0d-c15420d25b65-kube-api-access-dppbd\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.728118 4718 generic.go:334] "Generic (PLEG): container finished" podID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerID="020e5be7057b55e84f291fdeade44977b4a464ff81d8f413827faa50312cb45e" exitCode=0 Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 
14:46:25.728182 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerDied","Data":"020e5be7057b55e84f291fdeade44977b4a464ff81d8f413827faa50312cb45e"} Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.806974 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-catalog-content\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.807041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppbd\" (UniqueName: \"kubernetes.io/projected/dc5826e4-6a04-4886-ba0d-c15420d25b65-kube-api-access-dppbd\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.807090 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-utilities\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.807642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-catalog-content\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.807642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-utilities\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.831934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppbd\" (UniqueName: \"kubernetes.io/projected/dc5826e4-6a04-4886-ba0d-c15420d25b65-kube-api-access-dppbd\") pod \"redhat-operators-stmxx\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:25 crc kubenswrapper[4718]: I1210 14:46:25.979213 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:26 crc kubenswrapper[4718]: I1210 14:46:26.470208 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stmxx"] Dec 10 14:46:27 crc kubenswrapper[4718]: I1210 14:46:27.111429 4718 generic.go:334] "Generic (PLEG): container finished" podID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerID="eeecd972465f9638fade0253b376b4f5f970d262d77758ee78483d8cff861bc6" exitCode=0 Dec 10 14:46:27 crc kubenswrapper[4718]: I1210 14:46:27.111494 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerDied","Data":"eeecd972465f9638fade0253b376b4f5f970d262d77758ee78483d8cff861bc6"} Dec 10 14:46:27 crc kubenswrapper[4718]: I1210 14:46:27.111932 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerStarted","Data":"7b74ffd9ebb7c4c7d86d4c29b09d5ce376707253826ac9f9e4cab6511f6e23ce"} Dec 10 14:46:28 crc kubenswrapper[4718]: I1210 14:46:28.119053 4718 generic.go:334] "Generic (PLEG): 
container finished" podID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerID="6a5187b612ed44c1d3184ab8acbbfd6b2405b2b9521a3d0f7be607f29b6dd814" exitCode=0 Dec 10 14:46:28 crc kubenswrapper[4718]: I1210 14:46:28.119105 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerDied","Data":"6a5187b612ed44c1d3184ab8acbbfd6b2405b2b9521a3d0f7be607f29b6dd814"} Dec 10 14:46:29 crc kubenswrapper[4718]: I1210 14:46:29.128825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerStarted","Data":"5e235910de41c698b295c686bef2951bc265b078a19fca9471f3d47e07b8fdec"} Dec 10 14:46:29 crc kubenswrapper[4718]: I1210 14:46:29.136343 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerStarted","Data":"9fdd2588ee629ab358e5a9ff3fa66ebb79d0e752fac9ad128e6c117cc241472e"} Dec 10 14:46:29 crc kubenswrapper[4718]: I1210 14:46:29.153971 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" podStartSLOduration=4.314792499 podStartE2EDuration="6.153939947s" podCreationTimestamp="2025-12-10 14:46:23 +0000 UTC" firstStartedPulling="2025-12-10 14:46:25.73009512 +0000 UTC m=+890.679318537" lastFinishedPulling="2025-12-10 14:46:27.569242568 +0000 UTC m=+892.518465985" observedRunningTime="2025-12-10 14:46:29.148018515 +0000 UTC m=+894.097241932" watchObservedRunningTime="2025-12-10 14:46:29.153939947 +0000 UTC m=+894.103163364" Dec 10 14:46:31 crc kubenswrapper[4718]: I1210 14:46:31.149772 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerID="5e235910de41c698b295c686bef2951bc265b078a19fca9471f3d47e07b8fdec" exitCode=0 Dec 10 14:46:31 crc kubenswrapper[4718]: I1210 14:46:31.149822 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerDied","Data":"5e235910de41c698b295c686bef2951bc265b078a19fca9471f3d47e07b8fdec"} Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.578121 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.756951 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-util\") pod \"792e6faa-53e5-4bb6-baa5-32ce33828b19\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.757116 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnvzf\" (UniqueName: \"kubernetes.io/projected/792e6faa-53e5-4bb6-baa5-32ce33828b19-kube-api-access-wnvzf\") pod \"792e6faa-53e5-4bb6-baa5-32ce33828b19\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.757139 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-bundle\") pod \"792e6faa-53e5-4bb6-baa5-32ce33828b19\" (UID: \"792e6faa-53e5-4bb6-baa5-32ce33828b19\") " Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.761012 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-bundle" 
(OuterVolumeSpecName: "bundle") pod "792e6faa-53e5-4bb6-baa5-32ce33828b19" (UID: "792e6faa-53e5-4bb6-baa5-32ce33828b19"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.768338 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-util" (OuterVolumeSpecName: "util") pod "792e6faa-53e5-4bb6-baa5-32ce33828b19" (UID: "792e6faa-53e5-4bb6-baa5-32ce33828b19"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.771925 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792e6faa-53e5-4bb6-baa5-32ce33828b19-kube-api-access-wnvzf" (OuterVolumeSpecName: "kube-api-access-wnvzf") pod "792e6faa-53e5-4bb6-baa5-32ce33828b19" (UID: "792e6faa-53e5-4bb6-baa5-32ce33828b19"). InnerVolumeSpecName "kube-api-access-wnvzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.859662 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnvzf\" (UniqueName: \"kubernetes.io/projected/792e6faa-53e5-4bb6-baa5-32ce33828b19-kube-api-access-wnvzf\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.859735 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:32 crc kubenswrapper[4718]: I1210 14:46:32.859748 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/792e6faa-53e5-4bb6-baa5-32ce33828b19-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:33 crc kubenswrapper[4718]: I1210 14:46:33.163659 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" Dec 10 14:46:33 crc kubenswrapper[4718]: I1210 14:46:33.163638 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787" event={"ID":"792e6faa-53e5-4bb6-baa5-32ce33828b19","Type":"ContainerDied","Data":"c154cc3b28be866d487aedc133411813d4517209c77604f1b38d65ffe2ec016a"} Dec 10 14:46:33 crc kubenswrapper[4718]: I1210 14:46:33.164116 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c154cc3b28be866d487aedc133411813d4517209c77604f1b38d65ffe2ec016a" Dec 10 14:46:33 crc kubenswrapper[4718]: I1210 14:46:33.166916 4718 generic.go:334] "Generic (PLEG): container finished" podID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerID="9fdd2588ee629ab358e5a9ff3fa66ebb79d0e752fac9ad128e6c117cc241472e" exitCode=0 Dec 10 14:46:33 crc kubenswrapper[4718]: I1210 14:46:33.166969 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerDied","Data":"9fdd2588ee629ab358e5a9ff3fa66ebb79d0e752fac9ad128e6c117cc241472e"} Dec 10 14:46:35 crc kubenswrapper[4718]: I1210 14:46:35.182181 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerStarted","Data":"8923b7c70ef2cd3fba5a9e4e708f9713002f443659d2b98ebb8bc265fc80e063"} Dec 10 14:46:35 crc kubenswrapper[4718]: I1210 14:46:35.980766 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:35 crc kubenswrapper[4718]: I1210 14:46:35.980836 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:36 crc 
kubenswrapper[4718]: I1210 14:46:36.215666 4718 scope.go:117] "RemoveContainer" containerID="0fb2ef285e022a0800e4756e854df7efc695588b19f6162afdccad0898285b93" Dec 10 14:46:37 crc kubenswrapper[4718]: I1210 14:46:37.021891 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-stmxx" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="registry-server" probeResult="failure" output=< Dec 10 14:46:37 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 14:46:37 crc kubenswrapper[4718]: > Dec 10 14:46:37 crc kubenswrapper[4718]: I1210 14:46:37.200128 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hv62w_9db3984f-4589-462f-94d7-89a885be63d5/kube-multus/2.log" Dec 10 14:46:46 crc kubenswrapper[4718]: I1210 14:46:46.075354 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:46 crc kubenswrapper[4718]: I1210 14:46:46.214527 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-stmxx" podStartSLOduration=14.733500375 podStartE2EDuration="21.214495823s" podCreationTimestamp="2025-12-10 14:46:25 +0000 UTC" firstStartedPulling="2025-12-10 14:46:27.357579198 +0000 UTC m=+892.306802615" lastFinishedPulling="2025-12-10 14:46:33.838574646 +0000 UTC m=+898.787798063" observedRunningTime="2025-12-10 14:46:35.211441091 +0000 UTC m=+900.160664508" watchObservedRunningTime="2025-12-10 14:46:46.214495823 +0000 UTC m=+911.163719240" Dec 10 14:46:46 crc kubenswrapper[4718]: I1210 14:46:46.383750 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:46 crc kubenswrapper[4718]: I1210 14:46:46.692344 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stmxx"] Dec 10 14:46:47 crc 
kubenswrapper[4718]: I1210 14:46:47.559680 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-stmxx" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="registry-server" containerID="cri-o://8923b7c70ef2cd3fba5a9e4e708f9713002f443659d2b98ebb8bc265fc80e063" gracePeriod=2 Dec 10 14:46:48 crc kubenswrapper[4718]: I1210 14:46:48.088765 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:46:48 crc kubenswrapper[4718]: I1210 14:46:48.088856 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.105080 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc"] Dec 10 14:46:49 crc kubenswrapper[4718]: E1210 14:46:49.105588 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="pull" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.105614 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="pull" Dec 10 14:46:49 crc kubenswrapper[4718]: E1210 14:46:49.105630 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="extract" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.105639 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="extract" Dec 10 14:46:49 crc kubenswrapper[4718]: E1210 14:46:49.105655 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="util" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.105665 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="util" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.105864 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="792e6faa-53e5-4bb6-baa5-32ce33828b19" containerName="extract" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.106842 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.109105 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.109444 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-m9lw9" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.113289 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.119268 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.251802 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.252741 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.257264 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.257398 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-99jxv" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.270783 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.271817 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.274397 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.282155 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tbs\" (UniqueName: \"kubernetes.io/projected/8dd390a0-a978-4de3-ad2e-b76c9f9288ff-kube-api-access-69tbs\") pod \"obo-prometheus-operator-668cf9dfbb-czssc\" (UID: \"8dd390a0-a978-4de3-ad2e-b76c9f9288ff\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.289361 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.383936 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc2f2282-251a-4b32-b59d-8e28aa8e28b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-mxh5h\" (UID: \"dc2f2282-251a-4b32-b59d-8e28aa8e28b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.383995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69tbs\" (UniqueName: \"kubernetes.io/projected/8dd390a0-a978-4de3-ad2e-b76c9f9288ff-kube-api-access-69tbs\") pod \"obo-prometheus-operator-668cf9dfbb-czssc\" (UID: \"8dd390a0-a978-4de3-ad2e-b76c9f9288ff\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.384030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7da36d4-9aa5-4d1c-8135-f4af9a21dde9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-795fs\" (UID: \"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.384049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc2f2282-251a-4b32-b59d-8e28aa8e28b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-mxh5h\" (UID: \"dc2f2282-251a-4b32-b59d-8e28aa8e28b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.384227 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7da36d4-9aa5-4d1c-8135-f4af9a21dde9-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-85c6579888-795fs\" (UID: \"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.416333 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69tbs\" (UniqueName: \"kubernetes.io/projected/8dd390a0-a978-4de3-ad2e-b76c9f9288ff-kube-api-access-69tbs\") pod \"obo-prometheus-operator-668cf9dfbb-czssc\" (UID: \"8dd390a0-a978-4de3-ad2e-b76c9f9288ff\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.425991 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.428584 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-k2ncb"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.429885 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.433604 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.437969 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-j8gsq" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.438628 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-k2ncb"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.486804 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc2f2282-251a-4b32-b59d-8e28aa8e28b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-mxh5h\" (UID: \"dc2f2282-251a-4b32-b59d-8e28aa8e28b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.486888 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7da36d4-9aa5-4d1c-8135-f4af9a21dde9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-795fs\" (UID: \"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.486917 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc2f2282-251a-4b32-b59d-8e28aa8e28b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-mxh5h\" (UID: \"dc2f2282-251a-4b32-b59d-8e28aa8e28b1\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.486976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7da36d4-9aa5-4d1c-8135-f4af9a21dde9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-795fs\" (UID: \"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.496758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc2f2282-251a-4b32-b59d-8e28aa8e28b1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-mxh5h\" (UID: \"dc2f2282-251a-4b32-b59d-8e28aa8e28b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.502684 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc2f2282-251a-4b32-b59d-8e28aa8e28b1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-mxh5h\" (UID: \"dc2f2282-251a-4b32-b59d-8e28aa8e28b1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.531219 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7da36d4-9aa5-4d1c-8135-f4af9a21dde9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-795fs\" (UID: \"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.576756 4718 generic.go:334] "Generic (PLEG): container 
finished" podID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerID="8923b7c70ef2cd3fba5a9e4e708f9713002f443659d2b98ebb8bc265fc80e063" exitCode=0 Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.576838 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerDied","Data":"8923b7c70ef2cd3fba5a9e4e708f9713002f443659d2b98ebb8bc265fc80e063"} Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.589423 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecb014f7-37b1-431b-a452-676f723287f4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-k2ncb\" (UID: \"ecb014f7-37b1-431b-a452-676f723287f4\") " pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.589738 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m49rl\" (UniqueName: \"kubernetes.io/projected/ecb014f7-37b1-431b-a452-676f723287f4-kube-api-access-m49rl\") pod \"observability-operator-d8bb48f5d-k2ncb\" (UID: \"ecb014f7-37b1-431b-a452-676f723287f4\") " pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.611051 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7da36d4-9aa5-4d1c-8135-f4af9a21dde9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85c6579888-795fs\" (UID: \"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.612852 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.649268 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-fxb27"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.650146 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.652991 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-7gdrf" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.680659 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-fxb27"] Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.691550 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecb014f7-37b1-431b-a452-676f723287f4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-k2ncb\" (UID: \"ecb014f7-37b1-431b-a452-676f723287f4\") " pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.691657 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m49rl\" (UniqueName: \"kubernetes.io/projected/ecb014f7-37b1-431b-a452-676f723287f4-kube-api-access-m49rl\") pod \"observability-operator-d8bb48f5d-k2ncb\" (UID: \"ecb014f7-37b1-431b-a452-676f723287f4\") " pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.697155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ecb014f7-37b1-431b-a452-676f723287f4-observability-operator-tls\") pod 
\"observability-operator-d8bb48f5d-k2ncb\" (UID: \"ecb014f7-37b1-431b-a452-676f723287f4\") " pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.724451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m49rl\" (UniqueName: \"kubernetes.io/projected/ecb014f7-37b1-431b-a452-676f723287f4-kube-api-access-m49rl\") pod \"observability-operator-d8bb48f5d-k2ncb\" (UID: \"ecb014f7-37b1-431b-a452-676f723287f4\") " pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.793049 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ae1b9f9-6939-4aa0-8651-a76dafd291a4-openshift-service-ca\") pod \"perses-operator-5446b9c989-fxb27\" (UID: \"5ae1b9f9-6939-4aa0-8651-a76dafd291a4\") " pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.793159 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qx47\" (UniqueName: \"kubernetes.io/projected/5ae1b9f9-6939-4aa0-8651-a76dafd291a4-kube-api-access-9qx47\") pod \"perses-operator-5446b9c989-fxb27\" (UID: \"5ae1b9f9-6939-4aa0-8651-a76dafd291a4\") " pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.871050 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.891624 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.894244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ae1b9f9-6939-4aa0-8651-a76dafd291a4-openshift-service-ca\") pod \"perses-operator-5446b9c989-fxb27\" (UID: \"5ae1b9f9-6939-4aa0-8651-a76dafd291a4\") " pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.894362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qx47\" (UniqueName: \"kubernetes.io/projected/5ae1b9f9-6939-4aa0-8651-a76dafd291a4-kube-api-access-9qx47\") pod \"perses-operator-5446b9c989-fxb27\" (UID: \"5ae1b9f9-6939-4aa0-8651-a76dafd291a4\") " pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:49 crc kubenswrapper[4718]: I1210 14:46:49.895361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ae1b9f9-6939-4aa0-8651-a76dafd291a4-openshift-service-ca\") pod \"perses-operator-5446b9c989-fxb27\" (UID: \"5ae1b9f9-6939-4aa0-8651-a76dafd291a4\") " pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.063406 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qx47\" (UniqueName: \"kubernetes.io/projected/5ae1b9f9-6939-4aa0-8651-a76dafd291a4-kube-api-access-9qx47\") pod \"perses-operator-5446b9c989-fxb27\" (UID: \"5ae1b9f9-6939-4aa0-8651-a76dafd291a4\") " pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.282893 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.290131 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.295401 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc"] Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.417901 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-catalog-content\") pod \"dc5826e4-6a04-4886-ba0d-c15420d25b65\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.418358 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dppbd\" (UniqueName: \"kubernetes.io/projected/dc5826e4-6a04-4886-ba0d-c15420d25b65-kube-api-access-dppbd\") pod \"dc5826e4-6a04-4886-ba0d-c15420d25b65\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.418420 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-utilities\") pod \"dc5826e4-6a04-4886-ba0d-c15420d25b65\" (UID: \"dc5826e4-6a04-4886-ba0d-c15420d25b65\") " Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.419241 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-utilities" (OuterVolumeSpecName: "utilities") pod "dc5826e4-6a04-4886-ba0d-c15420d25b65" (UID: "dc5826e4-6a04-4886-ba0d-c15420d25b65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.423231 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5826e4-6a04-4886-ba0d-c15420d25b65-kube-api-access-dppbd" (OuterVolumeSpecName: "kube-api-access-dppbd") pod "dc5826e4-6a04-4886-ba0d-c15420d25b65" (UID: "dc5826e4-6a04-4886-ba0d-c15420d25b65"). InnerVolumeSpecName "kube-api-access-dppbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.522514 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dppbd\" (UniqueName: \"kubernetes.io/projected/dc5826e4-6a04-4886-ba0d-c15420d25b65-kube-api-access-dppbd\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.522556 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.541324 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h"] Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.588546 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stmxx" event={"ID":"dc5826e4-6a04-4886-ba0d-c15420d25b65","Type":"ContainerDied","Data":"7b74ffd9ebb7c4c7d86d4c29b09d5ce376707253826ac9f9e4cab6511f6e23ce"} Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.588607 4718 scope.go:117] "RemoveContainer" containerID="8923b7c70ef2cd3fba5a9e4e708f9713002f443659d2b98ebb8bc265fc80e063" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.588809 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stmxx" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.598907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" event={"ID":"8dd390a0-a978-4de3-ad2e-b76c9f9288ff","Type":"ContainerStarted","Data":"15b8df04c4ae313f4255e4a7a595aae44a32ae06ca344cf69eb4100dc8228080"} Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.602975 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc5826e4-6a04-4886-ba0d-c15420d25b65" (UID: "dc5826e4-6a04-4886-ba0d-c15420d25b65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:46:50 crc kubenswrapper[4718]: W1210 14:46:50.607314 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7da36d4_9aa5_4d1c_8135_f4af9a21dde9.slice/crio-29be7d9ec93565c9c359ff81e87969b5d27500d89b4a0d0e48da5e81bb4172e3 WatchSource:0}: Error finding container 29be7d9ec93565c9c359ff81e87969b5d27500d89b4a0d0e48da5e81bb4172e3: Status 404 returned error can't find the container with id 29be7d9ec93565c9c359ff81e87969b5d27500d89b4a0d0e48da5e81bb4172e3 Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.607633 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" event={"ID":"dc2f2282-251a-4b32-b59d-8e28aa8e28b1","Type":"ContainerStarted","Data":"ad1bbda6d455616281bfc269a9c0dcf4dfdd39e678766e64ec3ca17e2ee7b3c8"} Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.608253 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs"] Dec 10 14:46:50 crc kubenswrapper[4718]: 
I1210 14:46:50.625077 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5826e4-6a04-4886-ba0d-c15420d25b65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.642654 4718 scope.go:117] "RemoveContainer" containerID="9fdd2588ee629ab358e5a9ff3fa66ebb79d0e752fac9ad128e6c117cc241472e" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.692761 4718 scope.go:117] "RemoveContainer" containerID="eeecd972465f9638fade0253b376b4f5f970d262d77758ee78483d8cff861bc6" Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.696301 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-fxb27"] Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.767497 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-k2ncb"] Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.942455 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stmxx"] Dec 10 14:46:50 crc kubenswrapper[4718]: I1210 14:46:50.948965 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-stmxx"] Dec 10 14:46:51 crc kubenswrapper[4718]: I1210 14:46:51.618038 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-fxb27" event={"ID":"5ae1b9f9-6939-4aa0-8651-a76dafd291a4","Type":"ContainerStarted","Data":"a498fef45bc12d8fa425b12f452536882639a1ad3eebecd581b90eeb736f72d3"} Dec 10 14:46:51 crc kubenswrapper[4718]: I1210 14:46:51.619626 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" event={"ID":"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9","Type":"ContainerStarted","Data":"29be7d9ec93565c9c359ff81e87969b5d27500d89b4a0d0e48da5e81bb4172e3"} Dec 10 14:46:51 crc 
kubenswrapper[4718]: I1210 14:46:51.621127 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" event={"ID":"ecb014f7-37b1-431b-a452-676f723287f4","Type":"ContainerStarted","Data":"266c1b7fbcc83668b391616f84d2fd0054fbb8a49a005bfa62459e1c487d929a"} Dec 10 14:46:52 crc kubenswrapper[4718]: I1210 14:46:52.031225 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" path="/var/lib/kubelet/pods/dc5826e4-6a04-4886-ba0d-c15420d25b65/volumes" Dec 10 14:47:10 crc kubenswrapper[4718]: E1210 14:47:10.976741 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Dec 10 14:47:10 crc kubenswrapper[4718]: E1210 14:47:10.977564 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) 
--images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-co
nsole-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-o
bservability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m49rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-k2ncb_openshift-operators(ecb014f7-37b1-431b-a452-676f723287f4): ErrImagePull: rpc error: code 
= Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:47:10 crc kubenswrapper[4718]: E1210 14:47:10.979412 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" podUID="ecb014f7-37b1-431b-a452-676f723287f4" Dec 10 14:47:11 crc kubenswrapper[4718]: E1210 14:47:11.824244 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" podUID="ecb014f7-37b1-431b-a452-676f723287f4" Dec 10 14:47:13 crc kubenswrapper[4718]: E1210 14:47:13.831070 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Dec 10 14:47:13 crc kubenswrapper[4718]: E1210 14:47:13.831734 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} 
{} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qx47,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-fxb27_openshift-operators(5ae1b9f9-6939-4aa0-8651-a76dafd291a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:47:13 crc kubenswrapper[4718]: E1210 
14:47:13.832989 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-fxb27" podUID="5ae1b9f9-6939-4aa0-8651-a76dafd291a4" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.290566 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.290750 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-85c6579888-mxh5h_openshift-operators(dc2f2282-251a-4b32-b59d-8e28aa8e28b1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.292213 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" podUID="dc2f2282-251a-4b32-b59d-8e28aa8e28b1" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.295404 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.295528 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-85c6579888-795fs_openshift-operators(d7da36d4-9aa5-4d1c-8135-f4af9a21dde9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.296670 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" podUID="d7da36d4-9aa5-4d1c-8135-f4af9a21dde9" Dec 10 14:47:14 crc kubenswrapper[4718]: I1210 14:47:14.840150 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" 
event={"ID":"8dd390a0-a978-4de3-ad2e-b76c9f9288ff","Type":"ContainerStarted","Data":"29a0f2fa41054626ed9f69e05540c19a81c0617dcc6ab5b00987ffb58d75a17d"} Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.842118 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" podUID="d7da36d4-9aa5-4d1c-8135-f4af9a21dde9" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.842136 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-fxb27" podUID="5ae1b9f9-6939-4aa0-8651-a76dafd291a4" Dec 10 14:47:14 crc kubenswrapper[4718]: E1210 14:47:14.842207 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" podUID="dc2f2282-251a-4b32-b59d-8e28aa8e28b1" Dec 10 14:47:14 crc kubenswrapper[4718]: I1210 14:47:14.911680 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-czssc" podStartSLOduration=1.96863412 
podStartE2EDuration="25.911657036s" podCreationTimestamp="2025-12-10 14:46:49 +0000 UTC" firstStartedPulling="2025-12-10 14:46:50.343528904 +0000 UTC m=+915.292752321" lastFinishedPulling="2025-12-10 14:47:14.28655182 +0000 UTC m=+939.235775237" observedRunningTime="2025-12-10 14:47:14.909142492 +0000 UTC m=+939.858365939" watchObservedRunningTime="2025-12-10 14:47:14.911657036 +0000 UTC m=+939.860880453" Dec 10 14:47:18 crc kubenswrapper[4718]: I1210 14:47:18.084063 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:47:18 crc kubenswrapper[4718]: I1210 14:47:18.084128 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:47:23 crc kubenswrapper[4718]: I1210 14:47:23.907868 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" event={"ID":"ecb014f7-37b1-431b-a452-676f723287f4","Type":"ContainerStarted","Data":"b2d1b61f9f5c72566f6c40b7744f60cbac2aff393f0db05ba780bd5e92cd5ca6"} Dec 10 14:47:23 crc kubenswrapper[4718]: I1210 14:47:23.908782 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:47:23 crc kubenswrapper[4718]: I1210 14:47:23.929702 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" podStartSLOduration=2.203266131 podStartE2EDuration="34.92967018s" podCreationTimestamp="2025-12-10 14:46:49 
+0000 UTC" firstStartedPulling="2025-12-10 14:46:50.789894707 +0000 UTC m=+915.739118124" lastFinishedPulling="2025-12-10 14:47:23.516298756 +0000 UTC m=+948.465522173" observedRunningTime="2025-12-10 14:47:23.925961894 +0000 UTC m=+948.875185341" watchObservedRunningTime="2025-12-10 14:47:23.92967018 +0000 UTC m=+948.878893587" Dec 10 14:47:23 crc kubenswrapper[4718]: I1210 14:47:23.965462 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-k2ncb" Dec 10 14:47:26 crc kubenswrapper[4718]: I1210 14:47:26.925882 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" event={"ID":"d7da36d4-9aa5-4d1c-8135-f4af9a21dde9","Type":"ContainerStarted","Data":"2a9a73b8b82eb3616974f09340d2d5c649253c96032d6b2a7e42a99450acbd53"} Dec 10 14:47:26 crc kubenswrapper[4718]: I1210 14:47:26.946788 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-795fs" podStartSLOduration=2.371440424 podStartE2EDuration="37.946764694s" podCreationTimestamp="2025-12-10 14:46:49 +0000 UTC" firstStartedPulling="2025-12-10 14:46:50.611770879 +0000 UTC m=+915.560994296" lastFinishedPulling="2025-12-10 14:47:26.187095149 +0000 UTC m=+951.136318566" observedRunningTime="2025-12-10 14:47:26.942940156 +0000 UTC m=+951.892163623" watchObservedRunningTime="2025-12-10 14:47:26.946764694 +0000 UTC m=+951.895988111" Dec 10 14:47:28 crc kubenswrapper[4718]: I1210 14:47:28.941131 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" event={"ID":"dc2f2282-251a-4b32-b59d-8e28aa8e28b1","Type":"ContainerStarted","Data":"5f1c2b46adad89a97a4e42f2bd63a997e2305308befc6b677e463dd164289732"} Dec 10 14:47:28 crc kubenswrapper[4718]: I1210 14:47:28.962119 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85c6579888-mxh5h" podStartSLOduration=-9223371996.892675 podStartE2EDuration="39.96209963s" podCreationTimestamp="2025-12-10 14:46:49 +0000 UTC" firstStartedPulling="2025-12-10 14:46:50.541976445 +0000 UTC m=+915.491199862" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:47:28.960711504 +0000 UTC m=+953.909934921" watchObservedRunningTime="2025-12-10 14:47:28.96209963 +0000 UTC m=+953.911323047" Dec 10 14:47:32 crc kubenswrapper[4718]: I1210 14:47:32.966824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-fxb27" event={"ID":"5ae1b9f9-6939-4aa0-8651-a76dafd291a4","Type":"ContainerStarted","Data":"0bd46ca170c9b69eb46b483d8b11ac4d7300efc721bad549068fb2fa67dba52c"} Dec 10 14:47:32 crc kubenswrapper[4718]: I1210 14:47:32.967082 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:47:32 crc kubenswrapper[4718]: I1210 14:47:32.992483 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-fxb27" podStartSLOduration=2.081493511 podStartE2EDuration="43.992441873s" podCreationTimestamp="2025-12-10 14:46:49 +0000 UTC" firstStartedPulling="2025-12-10 14:46:50.700404937 +0000 UTC m=+915.649628354" lastFinishedPulling="2025-12-10 14:47:32.611353299 +0000 UTC m=+957.560576716" observedRunningTime="2025-12-10 14:47:32.987711512 +0000 UTC m=+957.936934929" watchObservedRunningTime="2025-12-10 14:47:32.992441873 +0000 UTC m=+957.941665290" Dec 10 14:47:40 crc kubenswrapper[4718]: I1210 14:47:40.288040 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-fxb27" Dec 10 14:47:48 crc kubenswrapper[4718]: I1210 14:47:48.084044 4718 
patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:47:48 crc kubenswrapper[4718]: I1210 14:47:48.084588 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:47:48 crc kubenswrapper[4718]: I1210 14:47:48.084660 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:47:48 crc kubenswrapper[4718]: I1210 14:47:48.085323 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5b3b0aee57acdc91ec51d93333e41cff8abf9d2f833c5f20d07f2e1f4175aed"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:47:48 crc kubenswrapper[4718]: I1210 14:47:48.085377 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://f5b3b0aee57acdc91ec51d93333e41cff8abf9d2f833c5f20d07f2e1f4175aed" gracePeriod=600 Dec 10 14:47:49 crc kubenswrapper[4718]: I1210 14:47:49.065344 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="f5b3b0aee57acdc91ec51d93333e41cff8abf9d2f833c5f20d07f2e1f4175aed" exitCode=0 Dec 10 14:47:49 crc 
kubenswrapper[4718]: I1210 14:47:49.065418 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"f5b3b0aee57acdc91ec51d93333e41cff8abf9d2f833c5f20d07f2e1f4175aed"} Dec 10 14:47:49 crc kubenswrapper[4718]: I1210 14:47:49.066054 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"f2aafbfef6aca74c8d0022be5bbc83fbbe6d3fcc33361fe89187f40bd7acdfa4"} Dec 10 14:47:49 crc kubenswrapper[4718]: I1210 14:47:49.066083 4718 scope.go:117] "RemoveContainer" containerID="469eaa563104223b399681102a51314f824a32389f5976a9684ee0f42162fac8" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.445634 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d2n7r"] Dec 10 14:47:51 crc kubenswrapper[4718]: E1210 14:47:51.446544 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="extract-content" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.446560 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="extract-content" Dec 10 14:47:51 crc kubenswrapper[4718]: E1210 14:47:51.446569 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="extract-utilities" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.446575 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="extract-utilities" Dec 10 14:47:51 crc kubenswrapper[4718]: E1210 14:47:51.446584 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" 
containerName="registry-server" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.446592 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="registry-server" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.446714 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5826e4-6a04-4886-ba0d-c15420d25b65" containerName="registry-server" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.447607 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.458522 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2n7r"] Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.565759 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-catalog-content\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.565847 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ccs8\" (UniqueName: \"kubernetes.io/projected/ad41f502-4d6d-4998-adb5-038ad95b7fd3-kube-api-access-8ccs8\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.565892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-utilities\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " 
pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.667673 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-catalog-content\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.667739 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ccs8\" (UniqueName: \"kubernetes.io/projected/ad41f502-4d6d-4998-adb5-038ad95b7fd3-kube-api-access-8ccs8\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.667780 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-utilities\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.668204 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-catalog-content\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.668237 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-utilities\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" 
Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.703726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ccs8\" (UniqueName: \"kubernetes.io/projected/ad41f502-4d6d-4998-adb5-038ad95b7fd3-kube-api-access-8ccs8\") pod \"redhat-marketplace-d2n7r\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") " pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:51 crc kubenswrapper[4718]: I1210 14:47:51.767634 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:47:52 crc kubenswrapper[4718]: I1210 14:47:52.016924 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2n7r"] Dec 10 14:47:52 crc kubenswrapper[4718]: W1210 14:47:52.032918 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad41f502_4d6d_4998_adb5_038ad95b7fd3.slice/crio-b6591a6b8846b7b3331559c7f7e3919ac7309cf1acdec6a92364852f446bde51 WatchSource:0}: Error finding container b6591a6b8846b7b3331559c7f7e3919ac7309cf1acdec6a92364852f446bde51: Status 404 returned error can't find the container with id b6591a6b8846b7b3331559c7f7e3919ac7309cf1acdec6a92364852f446bde51 Dec 10 14:47:52 crc kubenswrapper[4718]: I1210 14:47:52.095877 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2n7r" event={"ID":"ad41f502-4d6d-4998-adb5-038ad95b7fd3","Type":"ContainerStarted","Data":"b6591a6b8846b7b3331559c7f7e3919ac7309cf1acdec6a92364852f446bde51"} Dec 10 14:47:53 crc kubenswrapper[4718]: I1210 14:47:53.105225 4718 generic.go:334] "Generic (PLEG): container finished" podID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerID="5961d31e2ee3ea67d4d99f53aa1aad6ac0bf9fec9ff78094fdc1ec7b33e9f159" exitCode=0 Dec 10 14:47:53 crc kubenswrapper[4718]: I1210 14:47:53.105368 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-d2n7r" event={"ID":"ad41f502-4d6d-4998-adb5-038ad95b7fd3","Type":"ContainerDied","Data":"5961d31e2ee3ea67d4d99f53aa1aad6ac0bf9fec9ff78094fdc1ec7b33e9f159"} Dec 10 14:47:55 crc kubenswrapper[4718]: I1210 14:47:55.122672 4718 generic.go:334] "Generic (PLEG): container finished" podID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerID="492ed8452a40241773866592cdf602656643776ef9ae004ec4f76f635280a368" exitCode=0 Dec 10 14:47:55 crc kubenswrapper[4718]: I1210 14:47:55.122806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2n7r" event={"ID":"ad41f502-4d6d-4998-adb5-038ad95b7fd3","Type":"ContainerDied","Data":"492ed8452a40241773866592cdf602656643776ef9ae004ec4f76f635280a368"} Dec 10 14:47:56 crc kubenswrapper[4718]: I1210 14:47:56.134941 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2n7r" event={"ID":"ad41f502-4d6d-4998-adb5-038ad95b7fd3","Type":"ContainerStarted","Data":"55d94fd0b53a178fd045a0305afef4465b99597daaaaee494af8c5f8dc58870a"} Dec 10 14:47:56 crc kubenswrapper[4718]: I1210 14:47:56.159658 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d2n7r" podStartSLOduration=2.601078351 podStartE2EDuration="5.159627038s" podCreationTimestamp="2025-12-10 14:47:51 +0000 UTC" firstStartedPulling="2025-12-10 14:47:53.107299657 +0000 UTC m=+978.056523074" lastFinishedPulling="2025-12-10 14:47:55.665848344 +0000 UTC m=+980.615071761" observedRunningTime="2025-12-10 14:47:56.154979769 +0000 UTC m=+981.104203196" watchObservedRunningTime="2025-12-10 14:47:56.159627038 +0000 UTC m=+981.108850455" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.457714 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5"] Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 
14:47:58.459206 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.462822 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.478277 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5"] Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.571787 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.571863 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.571922 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpdw\" (UniqueName: \"kubernetes.io/projected/ee0c87fb-677b-4338-85a7-3a53ad85e806-kube-api-access-4mpdw\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.673645 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.674028 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.674092 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mpdw\" (UniqueName: \"kubernetes.io/projected/ee0c87fb-677b-4338-85a7-3a53ad85e806-kube-api-access-4mpdw\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.674097 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.674359 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.696026 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mpdw\" (UniqueName: \"kubernetes.io/projected/ee0c87fb-677b-4338-85a7-3a53ad85e806-kube-api-access-4mpdw\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:58 crc kubenswrapper[4718]: I1210 14:47:58.776217 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" Dec 10 14:47:59 crc kubenswrapper[4718]: I1210 14:47:59.141497 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5"] Dec 10 14:47:59 crc kubenswrapper[4718]: W1210 14:47:59.148643 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee0c87fb_677b_4338_85a7_3a53ad85e806.slice/crio-2c5eb94773f1e4e93e01785e5eae855a3595371c958ee233c99273df811ece5d WatchSource:0}: Error finding container 2c5eb94773f1e4e93e01785e5eae855a3595371c958ee233c99273df811ece5d: Status 404 returned error can't find the container with id 2c5eb94773f1e4e93e01785e5eae855a3595371c958ee233c99273df811ece5d Dec 10 14:47:59 crc kubenswrapper[4718]: I1210 14:47:59.172293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" event={"ID":"ee0c87fb-677b-4338-85a7-3a53ad85e806","Type":"ContainerStarted","Data":"2c5eb94773f1e4e93e01785e5eae855a3595371c958ee233c99273df811ece5d"} Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.768841 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.769533 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.812247 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dh2hb"] Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.813769 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.818335 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6de05f-b121-47be-9317-39b153c3012b-utilities\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.818478 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6de05f-b121-47be-9317-39b153c3012b-catalog-content\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.818545 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzm6m\" 
(UniqueName: \"kubernetes.io/projected/ed6de05f-b121-47be-9317-39b153c3012b-kube-api-access-jzm6m\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.825894 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dh2hb"] Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.852110 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.919549 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed6de05f-b121-47be-9317-39b153c3012b-utilities\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.919876 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6de05f-b121-47be-9317-39b153c3012b-catalog-content\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.920046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzm6m\" (UniqueName: \"kubernetes.io/projected/ed6de05f-b121-47be-9317-39b153c3012b-kube-api-access-jzm6m\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.920646 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ed6de05f-b121-47be-9317-39b153c3012b-utilities\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.920701 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed6de05f-b121-47be-9317-39b153c3012b-catalog-content\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:01 crc kubenswrapper[4718]: I1210 14:48:01.941865 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzm6m\" (UniqueName: \"kubernetes.io/projected/ed6de05f-b121-47be-9317-39b153c3012b-kube-api-access-jzm6m\") pod \"community-operators-dh2hb\" (UID: \"ed6de05f-b121-47be-9317-39b153c3012b\") " pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:02 crc kubenswrapper[4718]: I1210 14:48:02.146007 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dh2hb" Dec 10 14:48:02 crc kubenswrapper[4718]: I1210 14:48:02.195299 4718 generic.go:334] "Generic (PLEG): container finished" podID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerID="3f190eeb50cd2c539dcfa75a586537f51d86132635fa36f56f5550ecdfec14bb" exitCode=0 Dec 10 14:48:02 crc kubenswrapper[4718]: I1210 14:48:02.195369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" event={"ID":"ee0c87fb-677b-4338-85a7-3a53ad85e806","Type":"ContainerDied","Data":"3f190eeb50cd2c539dcfa75a586537f51d86132635fa36f56f5550ecdfec14bb"} Dec 10 14:48:02 crc kubenswrapper[4718]: I1210 14:48:02.271633 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d2n7r" Dec 10 14:48:02 crc kubenswrapper[4718]: I1210 14:48:02.563032 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dh2hb"] Dec 10 14:48:03 crc kubenswrapper[4718]: I1210 14:48:03.204802 4718 generic.go:334] "Generic (PLEG): container finished" podID="ed6de05f-b121-47be-9317-39b153c3012b" containerID="6bfaf354d79688a5532c88188edb4ffc6c3900b5fdc40e12796f6117f4a99bdf" exitCode=0 Dec 10 14:48:03 crc kubenswrapper[4718]: I1210 14:48:03.204886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2hb" event={"ID":"ed6de05f-b121-47be-9317-39b153c3012b","Type":"ContainerDied","Data":"6bfaf354d79688a5532c88188edb4ffc6c3900b5fdc40e12796f6117f4a99bdf"} Dec 10 14:48:03 crc kubenswrapper[4718]: I1210 14:48:03.205830 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2hb" event={"ID":"ed6de05f-b121-47be-9317-39b153c3012b","Type":"ContainerStarted","Data":"6085cddb97953a86a6925e71f4cc01904ac5a2f775afc3b73ced5ebdf362daa8"} Dec 10 14:48:05 crc kubenswrapper[4718]: 
I1210 14:48:05.223469 4718 generic.go:334] "Generic (PLEG): container finished" podID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerID="86dc11519486c834d5838d86779d14873ca7f19cadad4ca0d33b9341950d07b3" exitCode=0 Dec 10 14:48:05 crc kubenswrapper[4718]: I1210 14:48:05.223572 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" event={"ID":"ee0c87fb-677b-4338-85a7-3a53ad85e806","Type":"ContainerDied","Data":"86dc11519486c834d5838d86779d14873ca7f19cadad4ca0d33b9341950d07b3"} Dec 10 14:48:05 crc kubenswrapper[4718]: I1210 14:48:05.406613 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2n7r"] Dec 10 14:48:05 crc kubenswrapper[4718]: I1210 14:48:05.407102 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d2n7r" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="registry-server" containerID="cri-o://55d94fd0b53a178fd045a0305afef4465b99597daaaaee494af8c5f8dc58870a" gracePeriod=2 Dec 10 14:48:06 crc kubenswrapper[4718]: I1210 14:48:06.236969 4718 generic.go:334] "Generic (PLEG): container finished" podID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerID="55d94fd0b53a178fd045a0305afef4465b99597daaaaee494af8c5f8dc58870a" exitCode=0 Dec 10 14:48:06 crc kubenswrapper[4718]: I1210 14:48:06.237055 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2n7r" event={"ID":"ad41f502-4d6d-4998-adb5-038ad95b7fd3","Type":"ContainerDied","Data":"55d94fd0b53a178fd045a0305afef4465b99597daaaaee494af8c5f8dc58870a"} Dec 10 14:48:06 crc kubenswrapper[4718]: I1210 14:48:06.242381 4718 generic.go:334] "Generic (PLEG): container finished" podID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerID="74f7e2e44095b0036688f499704f0c1a434dcdc442771a217859b498586fa301" exitCode=0 Dec 10 14:48:06 crc kubenswrapper[4718]: I1210 
14:48:06.242468 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" event={"ID":"ee0c87fb-677b-4338-85a7-3a53ad85e806","Type":"ContainerDied","Data":"74f7e2e44095b0036688f499704f0c1a434dcdc442771a217859b498586fa301"}
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.750260 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2n7r"
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.757996 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5"
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.830616 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-utilities\") pod \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") "
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.830730 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-util\") pod \"ee0c87fb-677b-4338-85a7-3a53ad85e806\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") "
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.830817 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-catalog-content\") pod \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") "
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.831480 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ccs8\" (UniqueName: \"kubernetes.io/projected/ad41f502-4d6d-4998-adb5-038ad95b7fd3-kube-api-access-8ccs8\") pod \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\" (UID: \"ad41f502-4d6d-4998-adb5-038ad95b7fd3\") "
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.831617 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mpdw\" (UniqueName: \"kubernetes.io/projected/ee0c87fb-677b-4338-85a7-3a53ad85e806-kube-api-access-4mpdw\") pod \"ee0c87fb-677b-4338-85a7-3a53ad85e806\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") "
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.831679 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-bundle\") pod \"ee0c87fb-677b-4338-85a7-3a53ad85e806\" (UID: \"ee0c87fb-677b-4338-85a7-3a53ad85e806\") "
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.832900 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-bundle" (OuterVolumeSpecName: "bundle") pod "ee0c87fb-677b-4338-85a7-3a53ad85e806" (UID: "ee0c87fb-677b-4338-85a7-3a53ad85e806"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.833260 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-utilities" (OuterVolumeSpecName: "utilities") pod "ad41f502-4d6d-4998-adb5-038ad95b7fd3" (UID: "ad41f502-4d6d-4998-adb5-038ad95b7fd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.836927 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad41f502-4d6d-4998-adb5-038ad95b7fd3-kube-api-access-8ccs8" (OuterVolumeSpecName: "kube-api-access-8ccs8") pod "ad41f502-4d6d-4998-adb5-038ad95b7fd3" (UID: "ad41f502-4d6d-4998-adb5-038ad95b7fd3"). InnerVolumeSpecName "kube-api-access-8ccs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.842127 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0c87fb-677b-4338-85a7-3a53ad85e806-kube-api-access-4mpdw" (OuterVolumeSpecName: "kube-api-access-4mpdw") pod "ee0c87fb-677b-4338-85a7-3a53ad85e806" (UID: "ee0c87fb-677b-4338-85a7-3a53ad85e806"). InnerVolumeSpecName "kube-api-access-4mpdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.843487 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-util" (OuterVolumeSpecName: "util") pod "ee0c87fb-677b-4338-85a7-3a53ad85e806" (UID: "ee0c87fb-677b-4338-85a7-3a53ad85e806"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.853415 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad41f502-4d6d-4998-adb5-038ad95b7fd3" (UID: "ad41f502-4d6d-4998-adb5-038ad95b7fd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.935318 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-util\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.935428 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.935451 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ccs8\" (UniqueName: \"kubernetes.io/projected/ad41f502-4d6d-4998-adb5-038ad95b7fd3-kube-api-access-8ccs8\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.935466 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mpdw\" (UniqueName: \"kubernetes.io/projected/ee0c87fb-677b-4338-85a7-3a53ad85e806-kube-api-access-4mpdw\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.935477 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ee0c87fb-677b-4338-85a7-3a53ad85e806-bundle\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:08 crc kubenswrapper[4718]: I1210 14:48:08.935488 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad41f502-4d6d-4998-adb5-038ad95b7fd3-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.269635 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5" event={"ID":"ee0c87fb-677b-4338-85a7-3a53ad85e806","Type":"ContainerDied","Data":"2c5eb94773f1e4e93e01785e5eae855a3595371c958ee233c99273df811ece5d"}
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.269682 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5eb94773f1e4e93e01785e5eae855a3595371c958ee233c99273df811ece5d"
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.269688 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5"
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.271841 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d2n7r" event={"ID":"ad41f502-4d6d-4998-adb5-038ad95b7fd3","Type":"ContainerDied","Data":"b6591a6b8846b7b3331559c7f7e3919ac7309cf1acdec6a92364852f446bde51"}
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.271887 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d2n7r"
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.271895 4718 scope.go:117] "RemoveContainer" containerID="55d94fd0b53a178fd045a0305afef4465b99597daaaaee494af8c5f8dc58870a"
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.275340 4718 generic.go:334] "Generic (PLEG): container finished" podID="ed6de05f-b121-47be-9317-39b153c3012b" containerID="b815c77539aaf6436a3a5c9ce615f06173cc47977dee30c9a6ef9540c6dca967" exitCode=0
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.275417 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2hb" event={"ID":"ed6de05f-b121-47be-9317-39b153c3012b","Type":"ContainerDied","Data":"b815c77539aaf6436a3a5c9ce615f06173cc47977dee30c9a6ef9540c6dca967"}
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.290853 4718 scope.go:117] "RemoveContainer" containerID="492ed8452a40241773866592cdf602656643776ef9ae004ec4f76f635280a368"
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.315328 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2n7r"]
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.320846 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d2n7r"]
Dec 10 14:48:09 crc kubenswrapper[4718]: I1210 14:48:09.332069 4718 scope.go:117] "RemoveContainer" containerID="5961d31e2ee3ea67d4d99f53aa1aad6ac0bf9fec9ff78094fdc1ec7b33e9f159"
Dec 10 14:48:10 crc kubenswrapper[4718]: I1210 14:48:10.030011 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" path="/var/lib/kubelet/pods/ad41f502-4d6d-4998-adb5-038ad95b7fd3/volumes"
Dec 10 14:48:10 crc kubenswrapper[4718]: I1210 14:48:10.287172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2hb" event={"ID":"ed6de05f-b121-47be-9317-39b153c3012b","Type":"ContainerStarted","Data":"c965ae13cdf369d185d357ddeedf38c77f87289a92ea0fa24d84f2d824eec76d"}
Dec 10 14:48:10 crc kubenswrapper[4718]: I1210 14:48:10.311533 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dh2hb" podStartSLOduration=2.67676675 podStartE2EDuration="9.311504216s" podCreationTimestamp="2025-12-10 14:48:01 +0000 UTC" firstStartedPulling="2025-12-10 14:48:03.208368316 +0000 UTC m=+988.157591733" lastFinishedPulling="2025-12-10 14:48:09.843105772 +0000 UTC m=+994.792329199" observedRunningTime="2025-12-10 14:48:10.308501839 +0000 UTC m=+995.257725256" watchObservedRunningTime="2025-12-10 14:48:10.311504216 +0000 UTC m=+995.260727643"
Dec 10 14:48:12 crc kubenswrapper[4718]: I1210 14:48:12.146957 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dh2hb"
Dec 10 14:48:12 crc kubenswrapper[4718]: I1210 14:48:12.147363 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dh2hb"
Dec 10 14:48:12 crc kubenswrapper[4718]: I1210 14:48:12.227034 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dh2hb"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186586 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"]
Dec 10 14:48:13 crc kubenswrapper[4718]: E1210 14:48:13.186872 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="extract-utilities"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186887 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="extract-utilities"
Dec 10 14:48:13 crc kubenswrapper[4718]: E1210 14:48:13.186900 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="extract-content"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186907 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="extract-content"
Dec 10 14:48:13 crc kubenswrapper[4718]: E1210 14:48:13.186924 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="pull"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186931 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="pull"
Dec 10 14:48:13 crc kubenswrapper[4718]: E1210 14:48:13.186940 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="extract"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186946 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="extract"
Dec 10 14:48:13 crc kubenswrapper[4718]: E1210 14:48:13.186957 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="registry-server"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186964 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="registry-server"
Dec 10 14:48:13 crc kubenswrapper[4718]: E1210 14:48:13.186977 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="util"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.186984 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="util"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.187127 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0c87fb-677b-4338-85a7-3a53ad85e806" containerName="extract"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.187138 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad41f502-4d6d-4998-adb5-038ad95b7fd3" containerName="registry-server"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.187743 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.200068 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"]
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.200996 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.201130 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tl2sm"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.201278 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.302493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gbk\" (UniqueName: \"kubernetes.io/projected/b28ae747-6984-4dea-8efd-f3f238f56386-kube-api-access-47gbk\") pod \"nmstate-operator-5b5b58f5c8-99xzv\" (UID: \"b28ae747-6984-4dea-8efd-f3f238f56386\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.404931 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47gbk\" (UniqueName: \"kubernetes.io/projected/b28ae747-6984-4dea-8efd-f3f238f56386-kube-api-access-47gbk\") pod \"nmstate-operator-5b5b58f5c8-99xzv\" (UID: \"b28ae747-6984-4dea-8efd-f3f238f56386\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.424467 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47gbk\" (UniqueName: \"kubernetes.io/projected/b28ae747-6984-4dea-8efd-f3f238f56386-kube-api-access-47gbk\") pod \"nmstate-operator-5b5b58f5c8-99xzv\" (UID: \"b28ae747-6984-4dea-8efd-f3f238f56386\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"
Dec 10 14:48:13 crc kubenswrapper[4718]: I1210 14:48:13.513507 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"
Dec 10 14:48:14 crc kubenswrapper[4718]: I1210 14:48:14.038946 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv"]
Dec 10 14:48:14 crc kubenswrapper[4718]: I1210 14:48:14.318536 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv" event={"ID":"b28ae747-6984-4dea-8efd-f3f238f56386","Type":"ContainerStarted","Data":"f1a31f81841832cc4dc5ee45b04caabce4ca50f761ac902067ee86d2a7f390bc"}
Dec 10 14:48:21 crc kubenswrapper[4718]: I1210 14:48:21.643885 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv" event={"ID":"b28ae747-6984-4dea-8efd-f3f238f56386","Type":"ContainerStarted","Data":"3e0301a381e065f64bdfc6bdd4326a4ed3bb0359cf03c9a614e6c828a077a020"}
Dec 10 14:48:21 crc kubenswrapper[4718]: I1210 14:48:21.664138 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-99xzv" podStartSLOduration=2.142799401 podStartE2EDuration="8.664114214s" podCreationTimestamp="2025-12-10 14:48:13 +0000 UTC" firstStartedPulling="2025-12-10 14:48:14.036304767 +0000 UTC m=+998.985528184" lastFinishedPulling="2025-12-10 14:48:20.55761958 +0000 UTC m=+1005.506842997" observedRunningTime="2025-12-10 14:48:21.661197589 +0000 UTC m=+1006.610421026" watchObservedRunningTime="2025-12-10 14:48:21.664114214 +0000 UTC m=+1006.613337651"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.198498 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dh2hb"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.343242 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dh2hb"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.411697 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvqjr"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.412011 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fvqjr" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="registry-server" containerID="cri-o://80675f0d42fc72cc6cad30b230b2d8e6f7088aa257a67b30dfaf82df8131f6e0" gracePeriod=2
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.677616 4718 generic.go:334] "Generic (PLEG): container finished" podID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerID="80675f0d42fc72cc6cad30b230b2d8e6f7088aa257a67b30dfaf82df8131f6e0" exitCode=0
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.677686 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvqjr" event={"ID":"d800dab1-f8c3-46c8-b3bc-47186e9a999a","Type":"ContainerDied","Data":"80675f0d42fc72cc6cad30b230b2d8e6f7088aa257a67b30dfaf82df8131f6e0"}
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.695412 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.696551 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.698556 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pfxg5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.710076 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.711245 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.716030 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.737840 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.752566 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.764747 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2pm2t"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.765694 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837409 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrprl\" (UniqueName: \"kubernetes.io/projected/1d31a2e1-7843-4881-807c-38aed6f2ee1d-kube-api-access-xrprl\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837471 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-dbus-socket\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837518 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-ovs-socket\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837542 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgd4\" (UniqueName: \"kubernetes.io/projected/dc2fd026-789f-445a-befd-cdaf23a77c25-kube-api-access-zzgd4\") pod \"nmstate-metrics-7f946cbc9-l52p4\" (UID: \"dc2fd026-789f-445a-befd-cdaf23a77c25\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837571 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/64da34db-cd1c-46a7-9a41-69926590d466-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xkpb5\" (UID: \"64da34db-cd1c-46a7-9a41-69926590d466\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-nmstate-lock\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.837615 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrd2\" (UniqueName: \"kubernetes.io/projected/64da34db-cd1c-46a7-9a41-69926590d466-kube-api-access-rsrd2\") pod \"nmstate-webhook-5f6d4c5ccb-xkpb5\" (UID: \"64da34db-cd1c-46a7-9a41-69926590d466\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.869750 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvqjr"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.935093 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.937902 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-catalog-content\") pod \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") "
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.937969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-utilities\") pod \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") "
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.937997 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdglx\" (UniqueName: \"kubernetes.io/projected/d800dab1-f8c3-46c8-b3bc-47186e9a999a-kube-api-access-vdglx\") pod \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\" (UID: \"d800dab1-f8c3-46c8-b3bc-47186e9a999a\") "
Dec 10 14:48:22 crc kubenswrapper[4718]: E1210 14:48:22.938091 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="extract-content"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.938128 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="extract-content"
Dec 10 14:48:22 crc kubenswrapper[4718]: E1210 14:48:22.938139 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="registry-server"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.938147 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="registry-server"
Dec 10 14:48:22 crc kubenswrapper[4718]: E1210 14:48:22.938160 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="extract-utilities"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.938172 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="extract-utilities"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.938336 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" containerName="registry-server"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.939313 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-utilities" (OuterVolumeSpecName: "utilities") pod "d800dab1-f8c3-46c8-b3bc-47186e9a999a" (UID: "d800dab1-f8c3-46c8-b3bc-47186e9a999a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.938101 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrprl\" (UniqueName: \"kubernetes.io/projected/1d31a2e1-7843-4881-807c-38aed6f2ee1d-kube-api-access-xrprl\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.941664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-dbus-socket\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.941768 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-ovs-socket\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.941818 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgd4\" (UniqueName: \"kubernetes.io/projected/dc2fd026-789f-445a-befd-cdaf23a77c25-kube-api-access-zzgd4\") pod \"nmstate-metrics-7f946cbc9-l52p4\" (UID: \"dc2fd026-789f-445a-befd-cdaf23a77c25\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.941885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/64da34db-cd1c-46a7-9a41-69926590d466-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xkpb5\" (UID: \"64da34db-cd1c-46a7-9a41-69926590d466\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.941926 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-nmstate-lock\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.941950 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrd2\" (UniqueName: \"kubernetes.io/projected/64da34db-cd1c-46a7-9a41-69926590d466-kube-api-access-rsrd2\") pod \"nmstate-webhook-5f6d4c5ccb-xkpb5\" (UID: \"64da34db-cd1c-46a7-9a41-69926590d466\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.942007 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.942320 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.942553 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-dbus-socket\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.942609 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-ovs-socket\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.944811 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1d31a2e1-7843-4881-807c-38aed6f2ee1d-nmstate-lock\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.948542 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-tjgg4"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.948796 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.951304 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.951974 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/64da34db-cd1c-46a7-9a41-69926590d466-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-xkpb5\" (UID: \"64da34db-cd1c-46a7-9a41-69926590d466\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.954797 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"]
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.966247 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrd2\" (UniqueName: \"kubernetes.io/projected/64da34db-cd1c-46a7-9a41-69926590d466-kube-api-access-rsrd2\") pod \"nmstate-webhook-5f6d4c5ccb-xkpb5\" (UID: \"64da34db-cd1c-46a7-9a41-69926590d466\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.966859 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d800dab1-f8c3-46c8-b3bc-47186e9a999a-kube-api-access-vdglx" (OuterVolumeSpecName: "kube-api-access-vdglx") pod "d800dab1-f8c3-46c8-b3bc-47186e9a999a" (UID: "d800dab1-f8c3-46c8-b3bc-47186e9a999a"). InnerVolumeSpecName "kube-api-access-vdglx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.981216 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrprl\" (UniqueName: \"kubernetes.io/projected/1d31a2e1-7843-4881-807c-38aed6f2ee1d-kube-api-access-xrprl\") pod \"nmstate-handler-2pm2t\" (UID: \"1d31a2e1-7843-4881-807c-38aed6f2ee1d\") " pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:22 crc kubenswrapper[4718]: I1210 14:48:22.982197 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzgd4\" (UniqueName: \"kubernetes.io/projected/dc2fd026-789f-445a-befd-cdaf23a77c25-kube-api-access-zzgd4\") pod \"nmstate-metrics-7f946cbc9-l52p4\" (UID: \"dc2fd026-789f-445a-befd-cdaf23a77c25\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.043420 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdglx\" (UniqueName: \"kubernetes.io/projected/d800dab1-f8c3-46c8-b3bc-47186e9a999a-kube-api-access-vdglx\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.043549 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d800dab1-f8c3-46c8-b3bc-47186e9a999a" (UID: "d800dab1-f8c3-46c8-b3bc-47186e9a999a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.047267 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.125108 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.141357 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f7dddfccd-2hnz2"]
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.143830 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2pm2t"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.145198 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15f18f34-32e0-49a4-b05d-ccd88e6c9541-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.145234 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15f18f34-32e0-49a4-b05d-ccd88e6c9541-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.145258 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8z7v\" (UniqueName: \"kubernetes.io/projected/15f18f34-32e0-49a4-b05d-ccd88e6c9541-kube-api-access-t8z7v\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.145416 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d800dab1-f8c3-46c8-b3bc-47186e9a999a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.148460 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f7dddfccd-2hnz2"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.178658 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f7dddfccd-2hnz2"]
Dec 10 14:48:23 crc kubenswrapper[4718]: W1210 14:48:23.206947 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d31a2e1_7843_4881_807c_38aed6f2ee1d.slice/crio-253ccadb7579a2abd98c9d0da0cff892b1bba54b373367f4e2387f483ac98b1c WatchSource:0}: Error finding container 253ccadb7579a2abd98c9d0da0cff892b1bba54b373367f4e2387f483ac98b1c: Status 404 returned error can't find the container with id 253ccadb7579a2abd98c9d0da0cff892b1bba54b373367f4e2387f483ac98b1c
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258342 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkc7\" (UniqueName: \"kubernetes.io/projected/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-kube-api-access-htkc7\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258432 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-oauth-config\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2"
Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258486 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName:
\"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-service-ca\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258520 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-oauth-serving-cert\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258562 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-serving-cert\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258596 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15f18f34-32e0-49a4-b05d-ccd88e6c9541-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258628 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15f18f34-32e0-49a4-b05d-ccd88e6c9541-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258661 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-t8z7v\" (UniqueName: \"kubernetes.io/projected/15f18f34-32e0-49a4-b05d-ccd88e6c9541-kube-api-access-t8z7v\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258710 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-trusted-ca-bundle\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.258741 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-config\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.260324 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/15f18f34-32e0-49a4-b05d-ccd88e6c9541-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: E1210 14:48:23.260457 4718 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 10 14:48:23 crc kubenswrapper[4718]: E1210 14:48:23.260529 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15f18f34-32e0-49a4-b05d-ccd88e6c9541-plugin-serving-cert podName:15f18f34-32e0-49a4-b05d-ccd88e6c9541 nodeName:}" failed. 
No retries permitted until 2025-12-10 14:48:23.760504933 +0000 UTC m=+1008.709728350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/15f18f34-32e0-49a4-b05d-ccd88e6c9541-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-p79rx" (UID: "15f18f34-32e0-49a4-b05d-ccd88e6c9541") : secret "plugin-serving-cert" not found Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.312672 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8z7v\" (UniqueName: \"kubernetes.io/projected/15f18f34-32e0-49a4-b05d-ccd88e6c9541-kube-api-access-t8z7v\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402754 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-service-ca\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402799 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-oauth-serving-cert\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402833 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-serving-cert\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " 
pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402885 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-trusted-ca-bundle\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-config\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402940 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htkc7\" (UniqueName: \"kubernetes.io/projected/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-kube-api-access-htkc7\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.402960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-oauth-config\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.403811 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-service-ca\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 
10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.408288 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-oauth-config\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.408514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-serving-cert\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.413545 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-console-config\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.416307 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-oauth-serving-cert\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.417361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-trusted-ca-bundle\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.427419 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htkc7\" (UniqueName: \"kubernetes.io/projected/a0832afe-d3a0-47ca-900c-14b07bf4f2a2-kube-api-access-htkc7\") pod \"console-f7dddfccd-2hnz2\" (UID: \"a0832afe-d3a0-47ca-900c-14b07bf4f2a2\") " pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.487876 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.490058 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4"] Dec 10 14:48:23 crc kubenswrapper[4718]: W1210 14:48:23.496505 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc2fd026_789f_445a_befd_cdaf23a77c25.slice/crio-e870bc4bc1ef6c55c2f2ac5376ca9c0124c867cc95bce6609e28197f62bb1a1f WatchSource:0}: Error finding container e870bc4bc1ef6c55c2f2ac5376ca9c0124c867cc95bce6609e28197f62bb1a1f: Status 404 returned error can't find the container with id e870bc4bc1ef6c55c2f2ac5376ca9c0124c867cc95bce6609e28197f62bb1a1f Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.650370 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5"] Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.686013 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5" event={"ID":"64da34db-cd1c-46a7-9a41-69926590d466","Type":"ContainerStarted","Data":"1e37121b6a38d4caf3c35c06fa0ccfb1ea0c9469c24d2a68f486c7799c107230"} Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.698716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvqjr" 
event={"ID":"d800dab1-f8c3-46c8-b3bc-47186e9a999a","Type":"ContainerDied","Data":"ef7354d3bc78d27bc9151affb853fc58d7b818bbaa35d5842b726e6527085f1a"} Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.698794 4718 scope.go:117] "RemoveContainer" containerID="80675f0d42fc72cc6cad30b230b2d8e6f7088aa257a67b30dfaf82df8131f6e0" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.698979 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvqjr" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.713267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4" event={"ID":"dc2fd026-789f-445a-befd-cdaf23a77c25","Type":"ContainerStarted","Data":"e870bc4bc1ef6c55c2f2ac5376ca9c0124c867cc95bce6609e28197f62bb1a1f"} Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.716891 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2pm2t" event={"ID":"1d31a2e1-7843-4881-807c-38aed6f2ee1d","Type":"ContainerStarted","Data":"253ccadb7579a2abd98c9d0da0cff892b1bba54b373367f4e2387f483ac98b1c"} Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.749706 4718 scope.go:117] "RemoveContainer" containerID="1779b1e46ffd800cca7152253c87c8c125bf6836c7e0a91a7c06c725f3c76031" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.760582 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvqjr"] Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.787177 4718 scope.go:117] "RemoveContainer" containerID="f6943162e50739565457759982e522e18f092d943e3ebf2af33dc4ad5f83dede" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.792642 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvqjr"] Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.815600 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15f18f34-32e0-49a4-b05d-ccd88e6c9541-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.823813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/15f18f34-32e0-49a4-b05d-ccd88e6c9541-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-p79rx\" (UID: \"15f18f34-32e0-49a4-b05d-ccd88e6c9541\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.835137 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f7dddfccd-2hnz2"] Dec 10 14:48:23 crc kubenswrapper[4718]: I1210 14:48:23.901080 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" Dec 10 14:48:24 crc kubenswrapper[4718]: I1210 14:48:24.032306 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d800dab1-f8c3-46c8-b3bc-47186e9a999a" path="/var/lib/kubelet/pods/d800dab1-f8c3-46c8-b3bc-47186e9a999a/volumes" Dec 10 14:48:24 crc kubenswrapper[4718]: I1210 14:48:24.146309 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx"] Dec 10 14:48:24 crc kubenswrapper[4718]: I1210 14:48:24.726314 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f7dddfccd-2hnz2" event={"ID":"a0832afe-d3a0-47ca-900c-14b07bf4f2a2","Type":"ContainerStarted","Data":"99290d11486c17b5470840f0bb28f9dcafbacbcd8f7ff4f4cfe1ce72c94bca53"} Dec 10 14:48:24 crc kubenswrapper[4718]: I1210 14:48:24.726910 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f7dddfccd-2hnz2" event={"ID":"a0832afe-d3a0-47ca-900c-14b07bf4f2a2","Type":"ContainerStarted","Data":"8282aeb6a6414d2fab067200496ba90649aca4a88c5b13b54184d8b5afcf54dd"} Dec 10 14:48:24 crc kubenswrapper[4718]: I1210 14:48:24.728201 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" event={"ID":"15f18f34-32e0-49a4-b05d-ccd88e6c9541","Type":"ContainerStarted","Data":"c4d390590e3daf3ab7254ec6da91bfba78bdaec30b45ba7642c145f2b3eb0d04"} Dec 10 14:48:24 crc kubenswrapper[4718]: I1210 14:48:24.754512 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f7dddfccd-2hnz2" podStartSLOduration=1.754477909 podStartE2EDuration="1.754477909s" podCreationTimestamp="2025-12-10 14:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:48:24.746496605 +0000 UTC m=+1009.695720032" watchObservedRunningTime="2025-12-10 14:48:24.754477909 +0000 UTC m=+1009.703701326" Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.797076 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4" event={"ID":"dc2fd026-789f-445a-befd-cdaf23a77c25","Type":"ContainerStarted","Data":"e381550d2737908d1d4c3015b021fdad7ec421763b6daa1f550f4e6fcebd8b37"} Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.799315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2pm2t" event={"ID":"1d31a2e1-7843-4881-807c-38aed6f2ee1d","Type":"ContainerStarted","Data":"22064bddfb4c52060babb123eb8ee753ab90b4df21f1658adb88421dbee3cde2"} Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.800899 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2pm2t" Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.802595 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5" event={"ID":"64da34db-cd1c-46a7-9a41-69926590d466","Type":"ContainerStarted","Data":"c12023c6d3783168343aa04a29130fdfde255387a27134186ce89d90f69d31ae"} Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.803233 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5" Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.824369 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2pm2t" podStartSLOduration=1.5433802760000002 podStartE2EDuration="5.82434087s" podCreationTimestamp="2025-12-10 14:48:22 +0000 UTC" firstStartedPulling="2025-12-10 14:48:23.21235928 +0000 UTC m=+1008.161582697" lastFinishedPulling="2025-12-10 14:48:27.493319864 +0000 UTC m=+1012.442543291" observedRunningTime="2025-12-10 14:48:27.822685878 +0000 UTC m=+1012.771909295" watchObservedRunningTime="2025-12-10 14:48:27.82434087 +0000 UTC m=+1012.773564287" Dec 10 14:48:27 crc kubenswrapper[4718]: I1210 14:48:27.849486 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5" podStartSLOduration=2.070589187 podStartE2EDuration="5.849441643s" podCreationTimestamp="2025-12-10 14:48:22 +0000 UTC" firstStartedPulling="2025-12-10 14:48:23.664645682 +0000 UTC m=+1008.613869099" lastFinishedPulling="2025-12-10 14:48:27.443498148 +0000 UTC m=+1012.392721555" observedRunningTime="2025-12-10 14:48:27.844439525 +0000 UTC m=+1012.793662952" watchObservedRunningTime="2025-12-10 14:48:27.849441643 +0000 UTC m=+1012.798665060" Dec 10 14:48:29 crc kubenswrapper[4718]: I1210 14:48:29.827908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" 
event={"ID":"15f18f34-32e0-49a4-b05d-ccd88e6c9541","Type":"ContainerStarted","Data":"c655c2e3c11156ffc1a4688c870b35ef75f300c7ac224f510549f6ea216e1f5a"} Dec 10 14:48:29 crc kubenswrapper[4718]: I1210 14:48:29.856619 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-p79rx" podStartSLOduration=3.282271926 podStartE2EDuration="7.85659576s" podCreationTimestamp="2025-12-10 14:48:22 +0000 UTC" firstStartedPulling="2025-12-10 14:48:24.150381191 +0000 UTC m=+1009.099604608" lastFinishedPulling="2025-12-10 14:48:28.724705025 +0000 UTC m=+1013.673928442" observedRunningTime="2025-12-10 14:48:29.856355524 +0000 UTC m=+1014.805578941" watchObservedRunningTime="2025-12-10 14:48:29.85659576 +0000 UTC m=+1014.805819177" Dec 10 14:48:30 crc kubenswrapper[4718]: I1210 14:48:30.837471 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4" event={"ID":"dc2fd026-789f-445a-befd-cdaf23a77c25","Type":"ContainerStarted","Data":"c1ef5202cf0d53c4b3601ccc87374fd0673d7a47bda2f211255edbd28c5b3656"} Dec 10 14:48:30 crc kubenswrapper[4718]: I1210 14:48:30.860198 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-l52p4" podStartSLOduration=1.8241923469999999 podStartE2EDuration="8.860175099s" podCreationTimestamp="2025-12-10 14:48:22 +0000 UTC" firstStartedPulling="2025-12-10 14:48:23.515074612 +0000 UTC m=+1008.464298029" lastFinishedPulling="2025-12-10 14:48:30.551057364 +0000 UTC m=+1015.500280781" observedRunningTime="2025-12-10 14:48:30.856480675 +0000 UTC m=+1015.805704092" watchObservedRunningTime="2025-12-10 14:48:30.860175099 +0000 UTC m=+1015.809398506" Dec 10 14:48:33 crc kubenswrapper[4718]: I1210 14:48:33.167260 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2pm2t" Dec 10 14:48:33 crc kubenswrapper[4718]: I1210 
14:48:33.488350 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:33 crc kubenswrapper[4718]: I1210 14:48:33.488432 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:33 crc kubenswrapper[4718]: I1210 14:48:33.493298 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:33 crc kubenswrapper[4718]: I1210 14:48:33.865957 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f7dddfccd-2hnz2" Dec 10 14:48:33 crc kubenswrapper[4718]: I1210 14:48:33.931308 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t2gmf"] Dec 10 14:48:43 crc kubenswrapper[4718]: I1210 14:48:43.130356 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-xkpb5" Dec 10 14:48:58 crc kubenswrapper[4718]: I1210 14:48:58.984961 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-t2gmf" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerName="console" containerID="cri-o://ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78" gracePeriod=15 Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.469047 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t2gmf_17fe734a-f022-4fd4-8276-661e662e2c6b/console/0.log" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.469482 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609183 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-service-ca\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609264 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-oauth-serving-cert\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609326 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl7ln\" (UniqueName: \"kubernetes.io/projected/17fe734a-f022-4fd4-8276-661e662e2c6b-kube-api-access-bl7ln\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609354 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-serving-cert\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609416 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-console-config\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609476 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-oauth-config\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.609503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-trusted-ca-bundle\") pod \"17fe734a-f022-4fd4-8276-661e662e2c6b\" (UID: \"17fe734a-f022-4fd4-8276-661e662e2c6b\") " Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.610247 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.610469 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.610755 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-service-ca" (OuterVolumeSpecName: "service-ca") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.610989 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-console-config" (OuterVolumeSpecName: "console-config") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.615889 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.615989 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17fe734a-f022-4fd4-8276-661e662e2c6b-kube-api-access-bl7ln" (OuterVolumeSpecName: "kube-api-access-bl7ln") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "kube-api-access-bl7ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.616933 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "17fe734a-f022-4fd4-8276-661e662e2c6b" (UID: "17fe734a-f022-4fd4-8276-661e662e2c6b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711013 4718 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711054 4718 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711063 4718 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-service-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711072 4718 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711081 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl7ln\" (UniqueName: \"kubernetes.io/projected/17fe734a-f022-4fd4-8276-661e662e2c6b-kube-api-access-bl7ln\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711092 4718 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/17fe734a-f022-4fd4-8276-661e662e2c6b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 10 14:48:59 crc kubenswrapper[4718]: I1210 14:48:59.711100 4718 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/17fe734a-f022-4fd4-8276-661e662e2c6b-console-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:00 crc 
kubenswrapper[4718]: I1210 14:49:00.025697 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t2gmf_17fe734a-f022-4fd4-8276-661e662e2c6b/console/0.log" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.025779 4718 generic.go:334] "Generic (PLEG): container finished" podID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerID="ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78" exitCode=2 Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.025895 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.027577 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t2gmf" event={"ID":"17fe734a-f022-4fd4-8276-661e662e2c6b","Type":"ContainerDied","Data":"ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78"} Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.027646 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t2gmf" event={"ID":"17fe734a-f022-4fd4-8276-661e662e2c6b","Type":"ContainerDied","Data":"7f568676b729681a6b13944ff1b882ef67b601ac056db46dc1a853e7fb57cb21"} Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.027672 4718 scope.go:117] "RemoveContainer" containerID="ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.059867 4718 scope.go:117] "RemoveContainer" containerID="ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78" Dec 10 14:49:00 crc kubenswrapper[4718]: E1210 14:49:00.061003 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78\": container with ID starting with ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78 not 
found: ID does not exist" containerID="ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.061069 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78"} err="failed to get container status \"ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78\": rpc error: code = NotFound desc = could not find container \"ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78\": container with ID starting with ee4c5716a10b09e6e69b7ab971d84293431c791284b0d2c1b130a009ac51bc78 not found: ID does not exist" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.280907 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb"] Dec 10 14:49:00 crc kubenswrapper[4718]: E1210 14:49:00.281681 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerName="console" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.281777 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerName="console" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.281976 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" containerName="console" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.283129 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.285785 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.291580 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb"] Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.422154 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sr99\" (UniqueName: \"kubernetes.io/projected/9350b27a-484f-491b-9a2f-2ae333f3636b-kube-api-access-6sr99\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.422292 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.422350 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: 
I1210 14:49:00.523028 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sr99\" (UniqueName: \"kubernetes.io/projected/9350b27a-484f-491b-9a2f-2ae333f3636b-kube-api-access-6sr99\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.523110 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.523153 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.523716 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.523734 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.542312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sr99\" (UniqueName: \"kubernetes.io/projected/9350b27a-484f-491b-9a2f-2ae333f3636b-kube-api-access-6sr99\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.605867 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:00 crc kubenswrapper[4718]: I1210 14:49:00.803359 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb"] Dec 10 14:49:01 crc kubenswrapper[4718]: I1210 14:49:01.062253 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" event={"ID":"9350b27a-484f-491b-9a2f-2ae333f3636b","Type":"ContainerStarted","Data":"fced01ef71713b69dc057cde078b93d2fcf71c4bfe7b65b9161fd9939bf86bd3"} Dec 10 14:49:01 crc kubenswrapper[4718]: I1210 14:49:01.062327 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" event={"ID":"9350b27a-484f-491b-9a2f-2ae333f3636b","Type":"ContainerStarted","Data":"b2c670a9da3bfbc16fc12cd7ec88a4d74074be7eab8cb23d1574a1bf01f63cb2"} Dec 10 14:49:02 crc kubenswrapper[4718]: I1210 14:49:02.079774 4718 
generic.go:334] "Generic (PLEG): container finished" podID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerID="fced01ef71713b69dc057cde078b93d2fcf71c4bfe7b65b9161fd9939bf86bd3" exitCode=0 Dec 10 14:49:02 crc kubenswrapper[4718]: I1210 14:49:02.079916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" event={"ID":"9350b27a-484f-491b-9a2f-2ae333f3636b","Type":"ContainerDied","Data":"fced01ef71713b69dc057cde078b93d2fcf71c4bfe7b65b9161fd9939bf86bd3"} Dec 10 14:49:04 crc kubenswrapper[4718]: I1210 14:49:04.099747 4718 generic.go:334] "Generic (PLEG): container finished" podID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerID="dc3dd8c9a3372900fe012fb3dfceca562358dab51eff3e504683a7eb0eafac73" exitCode=0 Dec 10 14:49:04 crc kubenswrapper[4718]: I1210 14:49:04.099945 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" event={"ID":"9350b27a-484f-491b-9a2f-2ae333f3636b","Type":"ContainerDied","Data":"dc3dd8c9a3372900fe012fb3dfceca562358dab51eff3e504683a7eb0eafac73"} Dec 10 14:49:05 crc kubenswrapper[4718]: I1210 14:49:05.111943 4718 generic.go:334] "Generic (PLEG): container finished" podID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerID="0e49a81c195cba92e93db5d3731404777d213248d14c7bc9f15623db010b6e30" exitCode=0 Dec 10 14:49:05 crc kubenswrapper[4718]: I1210 14:49:05.111994 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" event={"ID":"9350b27a-484f-491b-9a2f-2ae333f3636b","Type":"ContainerDied","Data":"0e49a81c195cba92e93db5d3731404777d213248d14c7bc9f15623db010b6e30"} Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.399181 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.415539 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-bundle\") pod \"9350b27a-484f-491b-9a2f-2ae333f3636b\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.415591 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sr99\" (UniqueName: \"kubernetes.io/projected/9350b27a-484f-491b-9a2f-2ae333f3636b-kube-api-access-6sr99\") pod \"9350b27a-484f-491b-9a2f-2ae333f3636b\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.415618 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-util\") pod \"9350b27a-484f-491b-9a2f-2ae333f3636b\" (UID: \"9350b27a-484f-491b-9a2f-2ae333f3636b\") " Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.416907 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-bundle" (OuterVolumeSpecName: "bundle") pod "9350b27a-484f-491b-9a2f-2ae333f3636b" (UID: "9350b27a-484f-491b-9a2f-2ae333f3636b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.422643 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9350b27a-484f-491b-9a2f-2ae333f3636b-kube-api-access-6sr99" (OuterVolumeSpecName: "kube-api-access-6sr99") pod "9350b27a-484f-491b-9a2f-2ae333f3636b" (UID: "9350b27a-484f-491b-9a2f-2ae333f3636b"). InnerVolumeSpecName "kube-api-access-6sr99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.436300 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-util" (OuterVolumeSpecName: "util") pod "9350b27a-484f-491b-9a2f-2ae333f3636b" (UID: "9350b27a-484f-491b-9a2f-2ae333f3636b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.516727 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.516759 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sr99\" (UniqueName: \"kubernetes.io/projected/9350b27a-484f-491b-9a2f-2ae333f3636b-kube-api-access-6sr99\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:06 crc kubenswrapper[4718]: I1210 14:49:06.516777 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9350b27a-484f-491b-9a2f-2ae333f3636b-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:07 crc kubenswrapper[4718]: I1210 14:49:07.125915 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" event={"ID":"9350b27a-484f-491b-9a2f-2ae333f3636b","Type":"ContainerDied","Data":"b2c670a9da3bfbc16fc12cd7ec88a4d74074be7eab8cb23d1574a1bf01f63cb2"} Dec 10 14:49:07 crc kubenswrapper[4718]: I1210 14:49:07.125973 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c670a9da3bfbc16fc12cd7ec88a4d74074be7eab8cb23d1574a1bf01f63cb2" Dec 10 14:49:07 crc kubenswrapper[4718]: I1210 14:49:07.126018 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.020284 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gkkdd"] Dec 10 14:49:13 crc kubenswrapper[4718]: E1210 14:49:13.021181 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="pull" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.021203 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="pull" Dec 10 14:49:13 crc kubenswrapper[4718]: E1210 14:49:13.021219 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="util" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.021227 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="util" Dec 10 14:49:13 crc kubenswrapper[4718]: E1210 14:49:13.021243 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="extract" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.021251 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="extract" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.021379 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9350b27a-484f-491b-9a2f-2ae333f3636b" containerName="extract" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.022483 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.033441 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkkdd"] Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.120485 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-catalog-content\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.120541 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shd7j\" (UniqueName: \"kubernetes.io/projected/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-kube-api-access-shd7j\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.120584 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-utilities\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.221964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-utilities\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.222068 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-catalog-content\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.222200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shd7j\" (UniqueName: \"kubernetes.io/projected/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-kube-api-access-shd7j\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.223010 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-catalog-content\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.223059 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-utilities\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.242593 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shd7j\" (UniqueName: \"kubernetes.io/projected/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-kube-api-access-shd7j\") pod \"certified-operators-gkkdd\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.365196 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:13 crc kubenswrapper[4718]: I1210 14:49:13.610186 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkkdd"] Dec 10 14:49:14 crc kubenswrapper[4718]: I1210 14:49:14.173253 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerStarted","Data":"2c637c15a4db12ffbf77c12216533f367734a271d9e7f5e06b8fbea78dea4164"} Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.181474 4718 generic.go:334] "Generic (PLEG): container finished" podID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerID="b8589d1ce129297372378e0260eea10e8eed9ab9e6298849adb7b36fffc45056" exitCode=0 Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.181884 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerDied","Data":"b8589d1ce129297372378e0260eea10e8eed9ab9e6298849adb7b36fffc45056"} Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.441884 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz"] Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.442959 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.446284 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.446774 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.447260 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.447451 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gfstj" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.447606 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.459005 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz"] Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.459251 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c08926d-27dd-4571-a545-aeb91d97a810-apiservice-cert\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.459305 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c08926d-27dd-4571-a545-aeb91d97a810-webhook-cert\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: 
\"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.459329 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhm8\" (UniqueName: \"kubernetes.io/projected/2c08926d-27dd-4571-a545-aeb91d97a810-kube-api-access-hwhm8\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.560678 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c08926d-27dd-4571-a545-aeb91d97a810-apiservice-cert\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.560746 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c08926d-27dd-4571-a545-aeb91d97a810-webhook-cert\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.560779 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhm8\" (UniqueName: \"kubernetes.io/projected/2c08926d-27dd-4571-a545-aeb91d97a810-kube-api-access-hwhm8\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.568239 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c08926d-27dd-4571-a545-aeb91d97a810-webhook-cert\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.576053 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c08926d-27dd-4571-a545-aeb91d97a810-apiservice-cert\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.580552 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhm8\" (UniqueName: \"kubernetes.io/projected/2c08926d-27dd-4571-a545-aeb91d97a810-kube-api-access-hwhm8\") pod \"metallb-operator-controller-manager-5d7fdbff7b-h2jjz\" (UID: \"2c08926d-27dd-4571-a545-aeb91d97a810\") " pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.680255 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df"] Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.681297 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.683468 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.683626 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.684141 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-l8vjr" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.701249 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df"] Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.760802 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.864247 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-apiservice-cert\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.864346 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-webhook-cert\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.864375 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptmc6\" (UniqueName: \"kubernetes.io/projected/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-kube-api-access-ptmc6\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.965666 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-apiservice-cert\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.966130 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-webhook-cert\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.966154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptmc6\" (UniqueName: \"kubernetes.io/projected/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-kube-api-access-ptmc6\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.989597 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-webhook-cert\") pod 
\"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.993290 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-apiservice-cert\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.996365 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptmc6\" (UniqueName: \"kubernetes.io/projected/dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e-kube-api-access-ptmc6\") pod \"metallb-operator-webhook-server-6fdd887f57-qm9df\" (UID: \"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e\") " pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:15 crc kubenswrapper[4718]: I1210 14:49:15.999758 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:16 crc kubenswrapper[4718]: I1210 14:49:16.066152 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz"] Dec 10 14:49:16 crc kubenswrapper[4718]: I1210 14:49:16.205497 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" event={"ID":"2c08926d-27dd-4571-a545-aeb91d97a810","Type":"ContainerStarted","Data":"5ab54c870be87149b50f0334ccdc37e3ef9541e2587d00b2fc163b872ed1cd3b"} Dec 10 14:49:16 crc kubenswrapper[4718]: I1210 14:49:16.215429 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerStarted","Data":"223aaeda0f8704cb2007c916f169a5e6e693f4318dc2c011ea4a73975a88ebdd"} Dec 10 14:49:16 crc kubenswrapper[4718]: I1210 14:49:16.542024 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df"] Dec 10 14:49:16 crc kubenswrapper[4718]: W1210 14:49:16.547224 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc4a693b_8b70_42fd_9a9e_83fcdcc7cb6e.slice/crio-601645356bd1bed9d53c0f40ccf75004c3a1f898f2d20eba7855d2211ac22ac8 WatchSource:0}: Error finding container 601645356bd1bed9d53c0f40ccf75004c3a1f898f2d20eba7855d2211ac22ac8: Status 404 returned error can't find the container with id 601645356bd1bed9d53c0f40ccf75004c3a1f898f2d20eba7855d2211ac22ac8 Dec 10 14:49:17 crc kubenswrapper[4718]: I1210 14:49:17.224653 4718 generic.go:334] "Generic (PLEG): container finished" podID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerID="223aaeda0f8704cb2007c916f169a5e6e693f4318dc2c011ea4a73975a88ebdd" exitCode=0 Dec 10 14:49:17 crc kubenswrapper[4718]: I1210 
14:49:17.224743 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerDied","Data":"223aaeda0f8704cb2007c916f169a5e6e693f4318dc2c011ea4a73975a88ebdd"} Dec 10 14:49:17 crc kubenswrapper[4718]: I1210 14:49:17.229272 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" event={"ID":"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e","Type":"ContainerStarted","Data":"601645356bd1bed9d53c0f40ccf75004c3a1f898f2d20eba7855d2211ac22ac8"} Dec 10 14:49:20 crc kubenswrapper[4718]: I1210 14:49:20.267320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerStarted","Data":"257f4d9f272aa8e6bb2abbbc412a658184caf23b0f081f9651ba08831993b391"} Dec 10 14:49:20 crc kubenswrapper[4718]: I1210 14:49:20.293576 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gkkdd" podStartSLOduration=3.337265448 podStartE2EDuration="7.293547557s" podCreationTimestamp="2025-12-10 14:49:13 +0000 UTC" firstStartedPulling="2025-12-10 14:49:15.18455616 +0000 UTC m=+1060.133779577" lastFinishedPulling="2025-12-10 14:49:19.140838269 +0000 UTC m=+1064.090061686" observedRunningTime="2025-12-10 14:49:20.292544801 +0000 UTC m=+1065.241768238" watchObservedRunningTime="2025-12-10 14:49:20.293547557 +0000 UTC m=+1065.242770974" Dec 10 14:49:23 crc kubenswrapper[4718]: I1210 14:49:23.365399 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:23 crc kubenswrapper[4718]: I1210 14:49:23.365463 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:23 crc kubenswrapper[4718]: I1210 
14:49:23.462011 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:24 crc kubenswrapper[4718]: I1210 14:49:24.414186 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:24 crc kubenswrapper[4718]: I1210 14:49:24.470799 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkkdd"] Dec 10 14:49:26 crc kubenswrapper[4718]: I1210 14:49:26.370432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" event={"ID":"2c08926d-27dd-4571-a545-aeb91d97a810","Type":"ContainerStarted","Data":"4bfdcc397bf54e8e24a63d16fd2f198615b6e7b2ba0a924a4e2e232c65a8d19f"} Dec 10 14:49:26 crc kubenswrapper[4718]: I1210 14:49:26.371859 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:49:26 crc kubenswrapper[4718]: I1210 14:49:26.373443 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gkkdd" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="registry-server" containerID="cri-o://257f4d9f272aa8e6bb2abbbc412a658184caf23b0f081f9651ba08831993b391" gracePeriod=2 Dec 10 14:49:26 crc kubenswrapper[4718]: I1210 14:49:26.374040 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" event={"ID":"dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e","Type":"ContainerStarted","Data":"f5351973652ba236575aa16dc24c046bbf6d1c7b6a3e56a002d29064cd27c6c5"} Dec 10 14:49:26 crc kubenswrapper[4718]: I1210 14:49:26.374077 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:26 crc 
kubenswrapper[4718]: I1210 14:49:26.520376 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" podStartSLOduration=1.611453569 podStartE2EDuration="11.520342028s" podCreationTimestamp="2025-12-10 14:49:15 +0000 UTC" firstStartedPulling="2025-12-10 14:49:16.087460761 +0000 UTC m=+1061.036684178" lastFinishedPulling="2025-12-10 14:49:25.99634922 +0000 UTC m=+1070.945572637" observedRunningTime="2025-12-10 14:49:26.513062432 +0000 UTC m=+1071.462285859" watchObservedRunningTime="2025-12-10 14:49:26.520342028 +0000 UTC m=+1071.469565445" Dec 10 14:49:26 crc kubenswrapper[4718]: I1210 14:49:26.551048 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" podStartSLOduration=1.9808394169999999 podStartE2EDuration="11.551012943s" podCreationTimestamp="2025-12-10 14:49:15 +0000 UTC" firstStartedPulling="2025-12-10 14:49:16.550735534 +0000 UTC m=+1061.499958951" lastFinishedPulling="2025-12-10 14:49:26.12090906 +0000 UTC m=+1071.070132477" observedRunningTime="2025-12-10 14:49:26.550378227 +0000 UTC m=+1071.499601644" watchObservedRunningTime="2025-12-10 14:49:26.551012943 +0000 UTC m=+1071.500236360" Dec 10 14:49:27 crc kubenswrapper[4718]: I1210 14:49:27.389843 4718 generic.go:334] "Generic (PLEG): container finished" podID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerID="257f4d9f272aa8e6bb2abbbc412a658184caf23b0f081f9651ba08831993b391" exitCode=0 Dec 10 14:49:27 crc kubenswrapper[4718]: I1210 14:49:27.389975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerDied","Data":"257f4d9f272aa8e6bb2abbbc412a658184caf23b0f081f9651ba08831993b391"} Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.129824 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.319805 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shd7j\" (UniqueName: \"kubernetes.io/projected/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-kube-api-access-shd7j\") pod \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.320008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-catalog-content\") pod \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.320062 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-utilities\") pod \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\" (UID: \"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685\") " Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.321472 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-utilities" (OuterVolumeSpecName: "utilities") pod "c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" (UID: "c80f4e4c-9b8e-4a42-a7c4-4b59035f2685"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.330853 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-kube-api-access-shd7j" (OuterVolumeSpecName: "kube-api-access-shd7j") pod "c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" (UID: "c80f4e4c-9b8e-4a42-a7c4-4b59035f2685"). InnerVolumeSpecName "kube-api-access-shd7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.390169 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" (UID: "c80f4e4c-9b8e-4a42-a7c4-4b59035f2685"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.398490 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkkdd" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.398546 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkkdd" event={"ID":"c80f4e4c-9b8e-4a42-a7c4-4b59035f2685","Type":"ContainerDied","Data":"2c637c15a4db12ffbf77c12216533f367734a271d9e7f5e06b8fbea78dea4164"} Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.398648 4718 scope.go:117] "RemoveContainer" containerID="257f4d9f272aa8e6bb2abbbc412a658184caf23b0f081f9651ba08831993b391" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.420399 4718 scope.go:117] "RemoveContainer" containerID="223aaeda0f8704cb2007c916f169a5e6e693f4318dc2c011ea4a73975a88ebdd" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.421587 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.421622 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.421637 4718 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shd7j\" (UniqueName: \"kubernetes.io/projected/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685-kube-api-access-shd7j\") on node \"crc\" DevicePath \"\"" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.434952 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkkdd"] Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.439764 4718 scope.go:117] "RemoveContainer" containerID="b8589d1ce129297372378e0260eea10e8eed9ab9e6298849adb7b36fffc45056" Dec 10 14:49:28 crc kubenswrapper[4718]: I1210 14:49:28.443447 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gkkdd"] Dec 10 14:49:30 crc kubenswrapper[4718]: I1210 14:49:30.031280 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" path="/var/lib/kubelet/pods/c80f4e4c-9b8e-4a42-a7c4-4b59035f2685/volumes" Dec 10 14:49:30 crc kubenswrapper[4718]: I1210 14:49:30.044775 4718 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod17fe734a-f022-4fd4-8276-661e662e2c6b"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod17fe734a-f022-4fd4-8276-661e662e2c6b] : Timed out while waiting for systemd to remove kubepods-burstable-pod17fe734a_f022_4fd4_8276_661e662e2c6b.slice" Dec 10 14:49:30 crc kubenswrapper[4718]: E1210 14:49:30.044873 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod17fe734a-f022-4fd4-8276-661e662e2c6b] : unable to destroy cgroup paths for cgroup [kubepods burstable pod17fe734a-f022-4fd4-8276-661e662e2c6b] : Timed out while waiting for systemd to remove kubepods-burstable-pod17fe734a_f022_4fd4_8276_661e662e2c6b.slice" pod="openshift-console/console-f9d7485db-t2gmf" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" Dec 10 14:49:30 crc kubenswrapper[4718]: 
I1210 14:49:30.413962 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t2gmf" Dec 10 14:49:30 crc kubenswrapper[4718]: I1210 14:49:30.434801 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t2gmf"] Dec 10 14:49:30 crc kubenswrapper[4718]: I1210 14:49:30.438899 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-t2gmf"] Dec 10 14:49:32 crc kubenswrapper[4718]: I1210 14:49:32.028897 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17fe734a-f022-4fd4-8276-661e662e2c6b" path="/var/lib/kubelet/pods/17fe734a-f022-4fd4-8276-661e662e2c6b/volumes" Dec 10 14:49:36 crc kubenswrapper[4718]: I1210 14:49:36.006757 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" Dec 10 14:49:48 crc kubenswrapper[4718]: I1210 14:49:48.085023 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:49:48 crc kubenswrapper[4718]: I1210 14:49:48.085890 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:50:05 crc kubenswrapper[4718]: I1210 14:50:05.763306 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d7fdbff7b-h2jjz" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.622844 4718 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["metallb-system/frr-k8s-jhgkf"] Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.623765 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="extract-utilities" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.623798 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="extract-utilities" Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.623820 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="registry-server" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.623828 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="registry-server" Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.623836 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="extract-content" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.623843 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="extract-content" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.624010 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80f4e4c-9b8e-4a42-a7c4-4b59035f2685" containerName="registry-server" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.627149 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.631893 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.632305 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.632654 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-h2kvt" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.635282 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"] Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.639023 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.647817 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.671568 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"] Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.743531 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pbmz7"] Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.744566 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pbmz7" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.747886 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.748142 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-fvpmd" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.748286 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.748426 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.764379 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-wm6nx"] Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.765699 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-wm6nx" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.768620 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.780653 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-conf\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.780893 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781056 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckfn\" (UniqueName: \"kubernetes.io/projected/63f53c92-7e30-4be9-be8e-4eb3126d9fc1-kube-api-access-gckfn\") pod \"frr-k8s-webhook-server-7fcb986d4-45v7s\" (UID: \"63f53c92-7e30-4be9-be8e-4eb3126d9fc1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781195 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics-certs\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" 
(UniqueName: \"kubernetes.io/configmap/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-startup\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781477 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-sockets\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781614 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz6gm\" (UniqueName: \"kubernetes.io/projected/8c7642dd-0879-49cf-870a-a30a11c4d1b9-kube-api-access-gz6gm\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781716 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63f53c92-7e30-4be9-be8e-4eb3126d9fc1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-45v7s\" (UID: \"63f53c92-7e30-4be9-be8e-4eb3126d9fc1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.781816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-reloader\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.825969 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wm6nx"] Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883437 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-conf\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883514 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e08743b1-961a-43f6-a4b4-c546f2ce87cf-metrics-certs\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883549 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czh4w\" (UniqueName: \"kubernetes.io/projected/1a988a3f-3408-4963-8f6c-b77351286aab-kube-api-access-czh4w\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883595 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883624 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckfn\" (UniqueName: \"kubernetes.io/projected/63f53c92-7e30-4be9-be8e-4eb3126d9fc1-kube-api-access-gckfn\") pod \"frr-k8s-webhook-server-7fcb986d4-45v7s\" (UID: \"63f53c92-7e30-4be9-be8e-4eb3126d9fc1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883657 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883818 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a988a3f-3408-4963-8f6c-b77351286aab-metallb-excludel2\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883886 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics-certs\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883970 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-startup\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.883996 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-metrics-certs\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08743b1-961a-43f6-a4b4-c546f2ce87cf-cert\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.884184 4718 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.884285 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics-certs podName:8c7642dd-0879-49cf-870a-a30a11c4d1b9 nodeName:}" failed. No retries permitted until 2025-12-10 14:50:07.384254362 +0000 UTC m=+1112.333477779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics-certs") pod "frr-k8s-jhgkf" (UID: "8c7642dd-0879-49cf-870a-a30a11c4d1b9") : secret "frr-k8s-certs-secret" not found
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884432 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884557 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-sockets\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884608 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn69w\" (UniqueName: \"kubernetes.io/projected/e08743b1-961a-43f6-a4b4-c546f2ce87cf-kube-api-access-rn69w\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884639 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63f53c92-7e30-4be9-be8e-4eb3126d9fc1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-45v7s\" (UID: \"63f53c92-7e30-4be9-be8e-4eb3126d9fc1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz6gm\" (UniqueName: \"kubernetes.io/projected/8c7642dd-0879-49cf-870a-a30a11c4d1b9-kube-api-access-gz6gm\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884691 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-reloader\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.884932 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-sockets\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.885147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-startup\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.885801 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-reloader\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.886010 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8c7642dd-0879-49cf-870a-a30a11c4d1b9-frr-conf\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.891017 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/63f53c92-7e30-4be9-be8e-4eb3126d9fc1-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-45v7s\" (UID: \"63f53c92-7e30-4be9-be8e-4eb3126d9fc1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.899414 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckfn\" (UniqueName: \"kubernetes.io/projected/63f53c92-7e30-4be9-be8e-4eb3126d9fc1-kube-api-access-gckfn\") pod \"frr-k8s-webhook-server-7fcb986d4-45v7s\" (UID: \"63f53c92-7e30-4be9-be8e-4eb3126d9fc1\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.899925 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz6gm\" (UniqueName: \"kubernetes.io/projected/8c7642dd-0879-49cf-870a-a30a11c4d1b9-kube-api-access-gz6gm\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.961472 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985556 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e08743b1-961a-43f6-a4b4-c546f2ce87cf-metrics-certs\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985607 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czh4w\" (UniqueName: \"kubernetes.io/projected/1a988a3f-3408-4963-8f6c-b77351286aab-kube-api-access-czh4w\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985699 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a988a3f-3408-4963-8f6c-b77351286aab-metallb-excludel2\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985751 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-metrics-certs\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985772 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08743b1-961a-43f6-a4b4-c546f2ce87cf-cert\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.985814 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn69w\" (UniqueName: \"kubernetes.io/projected/e08743b1-961a-43f6-a4b4-c546f2ce87cf-kube-api-access-rn69w\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.987586 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1a988a3f-3408-4963-8f6c-b77351286aab-metallb-excludel2\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.987788 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 10 14:50:06 crc kubenswrapper[4718]: E1210 14:50:06.987944 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist podName:1a988a3f-3408-4963-8f6c-b77351286aab nodeName:}" failed. No retries permitted until 2025-12-10 14:50:07.487922397 +0000 UTC m=+1112.437145814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist") pod "speaker-pbmz7" (UID: "1a988a3f-3408-4963-8f6c-b77351286aab") : secret "metallb-memberlist" not found
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.992062 4718 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.994189 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e08743b1-961a-43f6-a4b4-c546f2ce87cf-metrics-certs\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:06 crc kubenswrapper[4718]: I1210 14:50:06.994660 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-metrics-certs\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.006002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czh4w\" (UniqueName: \"kubernetes.io/projected/1a988a3f-3408-4963-8f6c-b77351286aab-kube-api-access-czh4w\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.006768 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08743b1-961a-43f6-a4b4-c546f2ce87cf-cert\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.027709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn69w\" (UniqueName: \"kubernetes.io/projected/e08743b1-961a-43f6-a4b4-c546f2ce87cf-kube-api-access-rn69w\") pod \"controller-f8648f98b-wm6nx\" (UID: \"e08743b1-961a-43f6-a4b4-c546f2ce87cf\") " pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.086732 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.392944 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics-certs\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.398057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c7642dd-0879-49cf-870a-a30a11c4d1b9-metrics-certs\") pod \"frr-k8s-jhgkf\" (UID: \"8c7642dd-0879-49cf-870a-a30a11c4d1b9\") " pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.431019 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"]
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.494545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:07 crc kubenswrapper[4718]: E1210 14:50:07.494877 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 10 14:50:07 crc kubenswrapper[4718]: E1210 14:50:07.495063 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist podName:1a988a3f-3408-4963-8f6c-b77351286aab nodeName:}" failed. No retries permitted until 2025-12-10 14:50:08.495028062 +0000 UTC m=+1113.444251479 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist") pod "speaker-pbmz7" (UID: "1a988a3f-3408-4963-8f6c-b77351286aab") : secret "metallb-memberlist" not found
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.551177 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.557948 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-wm6nx"]
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.661360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" event={"ID":"63f53c92-7e30-4be9-be8e-4eb3126d9fc1","Type":"ContainerStarted","Data":"bf1edbd7873d0d7246f404bdbf1b437c7f821670b0ba35d83a914618678f11a4"}
Dec 10 14:50:07 crc kubenswrapper[4718]: I1210 14:50:07.663211 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wm6nx" event={"ID":"e08743b1-961a-43f6-a4b4-c546f2ce87cf","Type":"ContainerStarted","Data":"85f62e22b2fb0d1d3c3ee1bca449b7f62fa2d7da17dac5e84d9077610df38e28"}
Dec 10 14:50:08 crc kubenswrapper[4718]: I1210 14:50:08.509199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:08 crc kubenswrapper[4718]: E1210 14:50:08.509442 4718 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Dec 10 14:50:08 crc kubenswrapper[4718]: E1210 14:50:08.509585 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist podName:1a988a3f-3408-4963-8f6c-b77351286aab nodeName:}" failed. No retries permitted until 2025-12-10 14:50:10.50954084 +0000 UTC m=+1115.458764257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist") pod "speaker-pbmz7" (UID: "1a988a3f-3408-4963-8f6c-b77351286aab") : secret "metallb-memberlist" not found
Dec 10 14:50:09 crc kubenswrapper[4718]: I1210 14:50:09.695000 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wm6nx" event={"ID":"e08743b1-961a-43f6-a4b4-c546f2ce87cf","Type":"ContainerStarted","Data":"ae4c20967000ec7fc930b9bf385d90764573865cfa7df1ae6fc3eb0d4975b540"}
Dec 10 14:50:09 crc kubenswrapper[4718]: I1210 14:50:09.695331 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-wm6nx" event={"ID":"e08743b1-961a-43f6-a4b4-c546f2ce87cf","Type":"ContainerStarted","Data":"e6816704b289a6b25138f427a591202b1dd900fcd585197e6f3ddce83cd5a62f"}
Dec 10 14:50:09 crc kubenswrapper[4718]: I1210 14:50:09.695696 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:09 crc kubenswrapper[4718]: I1210 14:50:09.702726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"b9172b8643605c6674ff0192c11259c58b0a42c78c39be51a340e241ea97bae8"}
Dec 10 14:50:09 crc kubenswrapper[4718]: I1210 14:50:09.721177 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-wm6nx" podStartSLOduration=3.7211518359999998 podStartE2EDuration="3.721151836s" podCreationTimestamp="2025-12-10 14:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:50:09.715352167 +0000 UTC m=+1114.664575594" watchObservedRunningTime="2025-12-10 14:50:09.721151836 +0000 UTC m=+1114.670375263"
Dec 10 14:50:10 crc kubenswrapper[4718]: I1210 14:50:10.540300 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:10 crc kubenswrapper[4718]: I1210 14:50:10.546574 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1a988a3f-3408-4963-8f6c-b77351286aab-memberlist\") pod \"speaker-pbmz7\" (UID: \"1a988a3f-3408-4963-8f6c-b77351286aab\") " pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:10 crc kubenswrapper[4718]: I1210 14:50:10.663815 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:10 crc kubenswrapper[4718]: I1210 14:50:10.720113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pbmz7" event={"ID":"1a988a3f-3408-4963-8f6c-b77351286aab","Type":"ContainerStarted","Data":"cd3b978d42919bda8cc928cb0de803a3503e0cd4e4018d056f22e34fccb46841"}
Dec 10 14:50:11 crc kubenswrapper[4718]: I1210 14:50:11.733220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pbmz7" event={"ID":"1a988a3f-3408-4963-8f6c-b77351286aab","Type":"ContainerStarted","Data":"828008987d6529b4b189438fe5312b43ae9d26617b277ab495e62870bac3465e"}
Dec 10 14:50:11 crc kubenswrapper[4718]: I1210 14:50:11.733628 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pbmz7" event={"ID":"1a988a3f-3408-4963-8f6c-b77351286aab","Type":"ContainerStarted","Data":"6c6544b404625d49301b727d9d49559b140069b037b62058f718eff34bc33828"}
Dec 10 14:50:11 crc kubenswrapper[4718]: I1210 14:50:11.734008 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:11 crc kubenswrapper[4718]: I1210 14:50:11.761817 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pbmz7" podStartSLOduration=5.761790531 podStartE2EDuration="5.761790531s" podCreationTimestamp="2025-12-10 14:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:50:11.752423731 +0000 UTC m=+1116.701647168" watchObservedRunningTime="2025-12-10 14:50:11.761790531 +0000 UTC m=+1116.711013948"
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.084376 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.085035 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.796529 4718 generic.go:334] "Generic (PLEG): container finished" podID="8c7642dd-0879-49cf-870a-a30a11c4d1b9" containerID="e501d34b31c78a629ee1f3d31b86cb2361c651f994c222f1773e440a0ed407a3" exitCode=0
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.796641 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerDied","Data":"e501d34b31c78a629ee1f3d31b86cb2361c651f994c222f1773e440a0ed407a3"}
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.798824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" event={"ID":"63f53c92-7e30-4be9-be8e-4eb3126d9fc1","Type":"ContainerStarted","Data":"71942ade8a8ea07094d527485649c11532e72eca51dfa270d1580916b99b9a5c"}
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.798984 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"
Dec 10 14:50:18 crc kubenswrapper[4718]: I1210 14:50:18.845108 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s" podStartSLOduration=2.525826788 podStartE2EDuration="12.845082674s" podCreationTimestamp="2025-12-10 14:50:06 +0000 UTC" firstStartedPulling="2025-12-10 14:50:07.438764381 +0000 UTC m=+1112.387987798" lastFinishedPulling="2025-12-10 14:50:17.758020267 +0000 UTC m=+1122.707243684" observedRunningTime="2025-12-10 14:50:18.840783423 +0000 UTC m=+1123.790006840" watchObservedRunningTime="2025-12-10 14:50:18.845082674 +0000 UTC m=+1123.794306091"
Dec 10 14:50:19 crc kubenswrapper[4718]: I1210 14:50:19.808412 4718 generic.go:334] "Generic (PLEG): container finished" podID="8c7642dd-0879-49cf-870a-a30a11c4d1b9" containerID="c0694674fdad9a57f23036c365997e1f7df16a4bbeae128b78fe80e271bee7cd" exitCode=0
Dec 10 14:50:19 crc kubenswrapper[4718]: I1210 14:50:19.808492 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerDied","Data":"c0694674fdad9a57f23036c365997e1f7df16a4bbeae128b78fe80e271bee7cd"}
Dec 10 14:50:20 crc kubenswrapper[4718]: I1210 14:50:20.819484 4718 generic.go:334] "Generic (PLEG): container finished" podID="8c7642dd-0879-49cf-870a-a30a11c4d1b9" containerID="61678610c282da7e83ea29fbfdb786d58895b370898eef45d6e0b321df71d0ee" exitCode=0
Dec 10 14:50:20 crc kubenswrapper[4718]: I1210 14:50:20.819608 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerDied","Data":"61678610c282da7e83ea29fbfdb786d58895b370898eef45d6e0b321df71d0ee"}
Dec 10 14:50:21 crc kubenswrapper[4718]: I1210 14:50:21.831775 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"c7709c9defd7ce240ddaa4bd92953257c0da867d0e7a5f0882c4799c6271ec78"}
Dec 10 14:50:21 crc kubenswrapper[4718]: I1210 14:50:21.832191 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"91012281a68e09ebea30e3b54116e5727242ada1820d45aec613378d1124cbfd"}
Dec 10 14:50:21 crc kubenswrapper[4718]: I1210 14:50:21.832208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"de34c429dcf32b7afde3230da49b9d49b65fd53ed11783c7e6beb19c7acc6c82"}
Dec 10 14:50:21 crc kubenswrapper[4718]: I1210 14:50:21.832220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"080746e61a408d2afade5e9f5bb7ea9ac2c3cb59d368ab1df8abba2e44dbf2eb"}
Dec 10 14:50:22 crc kubenswrapper[4718]: I1210 14:50:22.843930 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"450c2e7bc5907b141dc6cab10a737f29278eb267f25e692d65f65a63831bd731"}
Dec 10 14:50:22 crc kubenswrapper[4718]: I1210 14:50:22.844487 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jhgkf" event={"ID":"8c7642dd-0879-49cf-870a-a30a11c4d1b9","Type":"ContainerStarted","Data":"adee800588af1ed9bdf58898b8596a06c7d59ca26aafc4ae11c0ea515c71909d"}
Dec 10 14:50:22 crc kubenswrapper[4718]: I1210 14:50:22.844514 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:22 crc kubenswrapper[4718]: I1210 14:50:22.871148 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jhgkf" podStartSLOduration=7.828413709 podStartE2EDuration="16.871123958s" podCreationTimestamp="2025-12-10 14:50:06 +0000 UTC" firstStartedPulling="2025-12-10 14:50:08.755019495 +0000 UTC m=+1113.704242912" lastFinishedPulling="2025-12-10 14:50:17.797729744 +0000 UTC m=+1122.746953161" observedRunningTime="2025-12-10 14:50:22.868752407 +0000 UTC m=+1127.817975834" watchObservedRunningTime="2025-12-10 14:50:22.871123958 +0000 UTC m=+1127.820347375"
Dec 10 14:50:27 crc kubenswrapper[4718]: I1210 14:50:27.093230 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-wm6nx"
Dec 10 14:50:27 crc kubenswrapper[4718]: I1210 14:50:27.551925 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:27 crc kubenswrapper[4718]: I1210 14:50:27.593130 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:30 crc kubenswrapper[4718]: I1210 14:50:30.668506 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pbmz7"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.031095 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-792rp"]
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.032814 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.033823 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-792rp"]
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.040977 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9fcns"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.041000 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.041029 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.215557 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xf56\" (UniqueName: \"kubernetes.io/projected/a16b369e-539b-4a82-a44d-c47175075dd4-kube-api-access-7xf56\") pod \"openstack-operator-index-792rp\" (UID: \"a16b369e-539b-4a82-a44d-c47175075dd4\") " pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.317456 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xf56\" (UniqueName: \"kubernetes.io/projected/a16b369e-539b-4a82-a44d-c47175075dd4-kube-api-access-7xf56\") pod \"openstack-operator-index-792rp\" (UID: \"a16b369e-539b-4a82-a44d-c47175075dd4\") " pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.341200 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xf56\" (UniqueName: \"kubernetes.io/projected/a16b369e-539b-4a82-a44d-c47175075dd4-kube-api-access-7xf56\") pod \"openstack-operator-index-792rp\" (UID: \"a16b369e-539b-4a82-a44d-c47175075dd4\") " pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.360865 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.895721 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-792rp"]
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.917562 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 14:50:34 crc kubenswrapper[4718]: I1210 14:50:34.926886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-792rp" event={"ID":"a16b369e-539b-4a82-a44d-c47175075dd4","Type":"ContainerStarted","Data":"a63f69ae52177bd26dc1d653e56d2a13a392e55ba843011ae35dce4b961680f0"}
Dec 10 14:50:37 crc kubenswrapper[4718]: I1210 14:50:37.029295 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-45v7s"
Dec 10 14:50:37 crc kubenswrapper[4718]: I1210 14:50:37.554961 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jhgkf"
Dec 10 14:50:37 crc kubenswrapper[4718]: I1210 14:50:37.949402 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-792rp" event={"ID":"a16b369e-539b-4a82-a44d-c47175075dd4","Type":"ContainerStarted","Data":"e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3"}
Dec 10 14:50:37 crc kubenswrapper[4718]: I1210 14:50:37.991086 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-792rp" podStartSLOduration=1.408983896 podStartE2EDuration="3.991049066s" podCreationTimestamp="2025-12-10 14:50:34 +0000 UTC" firstStartedPulling="2025-12-10 14:50:34.917227814 +0000 UTC m=+1139.866451231" lastFinishedPulling="2025-12-10 14:50:37.499292984 +0000 UTC m=+1142.448516401" observedRunningTime="2025-12-10 14:50:37.966652062 +0000 UTC m=+1142.915875479" watchObservedRunningTime="2025-12-10 14:50:37.991049066 +0000 UTC m=+1142.940272483"
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.210280 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-792rp"]
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.621965 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ggxhd"]
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.622880 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ggxhd"
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.631792 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ggxhd"]
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.735006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqz58\" (UniqueName: \"kubernetes.io/projected/5b117425-d366-4008-9216-4696f8736b81-kube-api-access-sqz58\") pod \"openstack-operator-index-ggxhd\" (UID: \"5b117425-d366-4008-9216-4696f8736b81\") " pod="openstack-operators/openstack-operator-index-ggxhd"
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.836197 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqz58\" (UniqueName: \"kubernetes.io/projected/5b117425-d366-4008-9216-4696f8736b81-kube-api-access-sqz58\") pod \"openstack-operator-index-ggxhd\" (UID: \"5b117425-d366-4008-9216-4696f8736b81\") " pod="openstack-operators/openstack-operator-index-ggxhd"
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.865558 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqz58\" (UniqueName: \"kubernetes.io/projected/5b117425-d366-4008-9216-4696f8736b81-kube-api-access-sqz58\") pod \"openstack-operator-index-ggxhd\" (UID: \"5b117425-d366-4008-9216-4696f8736b81\") " pod="openstack-operators/openstack-operator-index-ggxhd"
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.943951 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ggxhd"
Dec 10 14:50:39 crc kubenswrapper[4718]: I1210 14:50:39.962666 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-792rp" podUID="a16b369e-539b-4a82-a44d-c47175075dd4" containerName="registry-server" containerID="cri-o://e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3" gracePeriod=2
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.382005 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ggxhd"]
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.829511 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.852593 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xf56\" (UniqueName: \"kubernetes.io/projected/a16b369e-539b-4a82-a44d-c47175075dd4-kube-api-access-7xf56\") pod \"a16b369e-539b-4a82-a44d-c47175075dd4\" (UID: \"a16b369e-539b-4a82-a44d-c47175075dd4\") "
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.860594 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a16b369e-539b-4a82-a44d-c47175075dd4-kube-api-access-7xf56" (OuterVolumeSpecName: "kube-api-access-7xf56") pod "a16b369e-539b-4a82-a44d-c47175075dd4" (UID: "a16b369e-539b-4a82-a44d-c47175075dd4"). InnerVolumeSpecName "kube-api-access-7xf56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.954216 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xf56\" (UniqueName: \"kubernetes.io/projected/a16b369e-539b-4a82-a44d-c47175075dd4-kube-api-access-7xf56\") on node \"crc\" DevicePath \"\""
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.981459 4718 generic.go:334] "Generic (PLEG): container finished" podID="a16b369e-539b-4a82-a44d-c47175075dd4" containerID="e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3" exitCode=0
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.981632 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-792rp"
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.981726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-792rp" event={"ID":"a16b369e-539b-4a82-a44d-c47175075dd4","Type":"ContainerDied","Data":"e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3"}
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.981793 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-792rp" event={"ID":"a16b369e-539b-4a82-a44d-c47175075dd4","Type":"ContainerDied","Data":"a63f69ae52177bd26dc1d653e56d2a13a392e55ba843011ae35dce4b961680f0"}
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.981823 4718 scope.go:117] "RemoveContainer" containerID="e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3"
Dec 10 14:50:40 crc kubenswrapper[4718]: I1210 14:50:40.989375 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ggxhd" event={"ID":"5b117425-d366-4008-9216-4696f8736b81","Type":"ContainerStarted","Data":"37b6510dec98d02ebabe28b9950b0bcbdbf693ddc119f23675721c725c8646e1"}
Dec 10 14:50:41 crc kubenswrapper[4718]: I1210 14:50:41.004922 4718 scope.go:117] "RemoveContainer" containerID="e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3"
Dec 10 14:50:41 crc kubenswrapper[4718]: E1210 14:50:41.006839 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3\": container with ID starting with e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3 not found: ID does not exist" containerID="e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3"
Dec 10 14:50:41 crc kubenswrapper[4718]: I1210 14:50:41.006901 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3"} err="failed to get container status \"e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3\": rpc error: code = NotFound desc = could not find container \"e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3\": container with ID starting with e0214e4105754e48494d59fb20d782f79e6edfeb13bb3551d5fbdbda65f7f6f3 not found: ID does not exist"
Dec 10 14:50:41 crc kubenswrapper[4718]: I1210 14:50:41.056562 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-792rp"]
Dec 10 14:50:41 crc kubenswrapper[4718]: I1210 14:50:41.063916 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-792rp"]
Dec 10 14:50:41 crc kubenswrapper[4718]: E1210 14:50:41.118343 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda16b369e_539b_4a82_a44d_c47175075dd4.slice\": RecentStats: unable to find data in memory cache],
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda16b369e_539b_4a82_a44d_c47175075dd4.slice/crio-a63f69ae52177bd26dc1d653e56d2a13a392e55ba843011ae35dce4b961680f0\": RecentStats: unable to find data in memory cache]" Dec 10 14:50:42 crc kubenswrapper[4718]: I1210 14:50:41.999976 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ggxhd" event={"ID":"5b117425-d366-4008-9216-4696f8736b81","Type":"ContainerStarted","Data":"0981fba64ea7449792f11cfd0af563e65cc4cae2066fe3458d9aed883431cb2d"} Dec 10 14:50:42 crc kubenswrapper[4718]: I1210 14:50:42.030817 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a16b369e-539b-4a82-a44d-c47175075dd4" path="/var/lib/kubelet/pods/a16b369e-539b-4a82-a44d-c47175075dd4/volumes" Dec 10 14:50:42 crc kubenswrapper[4718]: I1210 14:50:42.034218 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ggxhd" podStartSLOduration=2.509801053 podStartE2EDuration="3.03417947s" podCreationTimestamp="2025-12-10 14:50:39 +0000 UTC" firstStartedPulling="2025-12-10 14:50:40.395484598 +0000 UTC m=+1145.344708015" lastFinishedPulling="2025-12-10 14:50:40.919863025 +0000 UTC m=+1145.869086432" observedRunningTime="2025-12-10 14:50:42.026622796 +0000 UTC m=+1146.975846223" watchObservedRunningTime="2025-12-10 14:50:42.03417947 +0000 UTC m=+1146.983402887" Dec 10 14:50:48 crc kubenswrapper[4718]: I1210 14:50:48.084194 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:50:48 crc kubenswrapper[4718]: I1210 14:50:48.084785 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:50:48 crc kubenswrapper[4718]: I1210 14:50:48.084848 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:50:48 crc kubenswrapper[4718]: I1210 14:50:48.085736 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2aafbfef6aca74c8d0022be5bbc83fbbe6d3fcc33361fe89187f40bd7acdfa4"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:50:48 crc kubenswrapper[4718]: I1210 14:50:48.085809 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://f2aafbfef6aca74c8d0022be5bbc83fbbe6d3fcc33361fe89187f40bd7acdfa4" gracePeriod=600 Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.067227 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="f2aafbfef6aca74c8d0022be5bbc83fbbe6d3fcc33361fe89187f40bd7acdfa4" exitCode=0 Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.067314 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"f2aafbfef6aca74c8d0022be5bbc83fbbe6d3fcc33361fe89187f40bd7acdfa4"} Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.068011 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"2ec2fde063b0fe89bfac326b092793e5b0835b83f4f064a57717d8a122925145"} Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.068045 4718 scope.go:117] "RemoveContainer" containerID="f5b3b0aee57acdc91ec51d93333e41cff8abf9d2f833c5f20d07f2e1f4175aed" Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.944221 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-ggxhd" Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.944280 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-ggxhd" Dec 10 14:50:49 crc kubenswrapper[4718]: I1210 14:50:49.977764 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-ggxhd" Dec 10 14:50:50 crc kubenswrapper[4718]: I1210 14:50:50.106696 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-ggxhd" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.566285 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln"] Dec 10 14:50:55 crc kubenswrapper[4718]: E1210 14:50:55.567253 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16b369e-539b-4a82-a44d-c47175075dd4" containerName="registry-server" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.567270 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16b369e-539b-4a82-a44d-c47175075dd4" containerName="registry-server" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.567408 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16b369e-539b-4a82-a44d-c47175075dd4" containerName="registry-server" Dec 10 14:50:55 crc 
kubenswrapper[4718]: I1210 14:50:55.568582 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.571706 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7lcgr" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.585760 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln"] Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.685670 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjz5z\" (UniqueName: \"kubernetes.io/projected/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-kube-api-access-bjz5z\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.685762 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-util\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.685795 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-bundle\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " 
pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.788019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-util\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.788085 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-bundle\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.788192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjz5z\" (UniqueName: \"kubernetes.io/projected/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-kube-api-access-bjz5z\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.788934 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-bundle\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.788934 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-util\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.814452 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjz5z\" (UniqueName: \"kubernetes.io/projected/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-kube-api-access-bjz5z\") pod \"d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:55 crc kubenswrapper[4718]: I1210 14:50:55.913011 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:50:56 crc kubenswrapper[4718]: I1210 14:50:56.467917 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln"] Dec 10 14:50:57 crc kubenswrapper[4718]: I1210 14:50:57.139308 4718 generic.go:334] "Generic (PLEG): container finished" podID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerID="bd197c18d0a385eb9a106624b3090034569576af51419d1f05c08f26288825a7" exitCode=0 Dec 10 14:50:57 crc kubenswrapper[4718]: I1210 14:50:57.139450 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" event={"ID":"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf","Type":"ContainerDied","Data":"bd197c18d0a385eb9a106624b3090034569576af51419d1f05c08f26288825a7"} Dec 10 14:50:57 crc kubenswrapper[4718]: I1210 14:50:57.141202 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" event={"ID":"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf","Type":"ContainerStarted","Data":"2ce97527eb62450978b4d6c42d5c4b8583c6e34a61cd19a07da2fc385f20dcef"} Dec 10 14:50:59 crc kubenswrapper[4718]: I1210 14:50:59.158701 4718 generic.go:334] "Generic (PLEG): container finished" podID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerID="56bcd2bd622bd1f28a9ef730056028a82b8d4f2cab8034ed8e8aee3f5de17a19" exitCode=0 Dec 10 14:50:59 crc kubenswrapper[4718]: I1210 14:50:59.158805 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" event={"ID":"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf","Type":"ContainerDied","Data":"56bcd2bd622bd1f28a9ef730056028a82b8d4f2cab8034ed8e8aee3f5de17a19"} Dec 10 14:51:00 crc kubenswrapper[4718]: I1210 14:51:00.180598 4718 generic.go:334] "Generic (PLEG): container finished" podID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerID="133928b4570a2f393da4f43cd72a5ccec291bdb40fbddaf577b00933d23ffaec" exitCode=0 Dec 10 14:51:00 crc kubenswrapper[4718]: I1210 14:51:00.180656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" event={"ID":"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf","Type":"ContainerDied","Data":"133928b4570a2f393da4f43cd72a5ccec291bdb40fbddaf577b00933d23ffaec"} Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.490120 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.584472 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-bundle\") pod \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.585240 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjz5z\" (UniqueName: \"kubernetes.io/projected/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-kube-api-access-bjz5z\") pod \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.585353 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-util\") pod \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\" (UID: \"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf\") " Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.587142 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-bundle" (OuterVolumeSpecName: "bundle") pod "92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" (UID: "92739fd0-cf7c-45af-a0db-dbf3f2ffdddf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.597732 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-kube-api-access-bjz5z" (OuterVolumeSpecName: "kube-api-access-bjz5z") pod "92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" (UID: "92739fd0-cf7c-45af-a0db-dbf3f2ffdddf"). InnerVolumeSpecName "kube-api-access-bjz5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.612942 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-util" (OuterVolumeSpecName: "util") pod "92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" (UID: "92739fd0-cf7c-45af-a0db-dbf3f2ffdddf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.687123 4718 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-util\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.687579 4718 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:01 crc kubenswrapper[4718]: I1210 14:51:01.687653 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjz5z\" (UniqueName: \"kubernetes.io/projected/92739fd0-cf7c-45af-a0db-dbf3f2ffdddf-kube-api-access-bjz5z\") on node \"crc\" DevicePath \"\"" Dec 10 14:51:02 crc kubenswrapper[4718]: I1210 14:51:02.197951 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" event={"ID":"92739fd0-cf7c-45af-a0db-dbf3f2ffdddf","Type":"ContainerDied","Data":"2ce97527eb62450978b4d6c42d5c4b8583c6e34a61cd19a07da2fc385f20dcef"} Dec 10 14:51:02 crc kubenswrapper[4718]: I1210 14:51:02.198025 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ce97527eb62450978b4d6c42d5c4b8583c6e34a61cd19a07da2fc385f20dcef" Dec 10 14:51:02 crc kubenswrapper[4718]: I1210 14:51:02.198137 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.783352 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss"] Dec 10 14:51:07 crc kubenswrapper[4718]: E1210 14:51:07.784456 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="pull" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.784473 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="pull" Dec 10 14:51:07 crc kubenswrapper[4718]: E1210 14:51:07.784486 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="extract" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.784492 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="extract" Dec 10 14:51:07 crc kubenswrapper[4718]: E1210 14:51:07.784503 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="util" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.784511 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="util" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.784632 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="92739fd0-cf7c-45af-a0db-dbf3f2ffdddf" containerName="extract" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.785150 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.787279 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-lbncw" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.813601 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss"] Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.893915 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mg4\" (UniqueName: \"kubernetes.io/projected/08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f-kube-api-access-g2mg4\") pod \"openstack-operator-controller-operator-966884dd6-7tsss\" (UID: \"08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f\") " pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:07 crc kubenswrapper[4718]: I1210 14:51:07.996077 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2mg4\" (UniqueName: \"kubernetes.io/projected/08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f-kube-api-access-g2mg4\") pod \"openstack-operator-controller-operator-966884dd6-7tsss\" (UID: \"08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f\") " pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:08 crc kubenswrapper[4718]: I1210 14:51:08.038524 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2mg4\" (UniqueName: \"kubernetes.io/projected/08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f-kube-api-access-g2mg4\") pod \"openstack-operator-controller-operator-966884dd6-7tsss\" (UID: \"08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f\") " pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:08 crc kubenswrapper[4718]: I1210 14:51:08.102150 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:08 crc kubenswrapper[4718]: I1210 14:51:08.565853 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss"] Dec 10 14:51:09 crc kubenswrapper[4718]: I1210 14:51:09.258855 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" event={"ID":"08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f","Type":"ContainerStarted","Data":"ffa96b4e7572a9bb1e7e674924c55a8ccfd1aa6dfa23f140030eb72fea61a375"} Dec 10 14:51:17 crc kubenswrapper[4718]: I1210 14:51:17.335703 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" event={"ID":"08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f","Type":"ContainerStarted","Data":"babd29d6ff6c7d03af2d0efe5424c68243b1bb757c250033e6a8501223a927c9"} Dec 10 14:51:17 crc kubenswrapper[4718]: I1210 14:51:17.337446 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:17 crc kubenswrapper[4718]: I1210 14:51:17.396455 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" podStartSLOduration=3.190698557 podStartE2EDuration="10.396377004s" podCreationTimestamp="2025-12-10 14:51:07 +0000 UTC" firstStartedPulling="2025-12-10 14:51:08.574195994 +0000 UTC m=+1173.523419411" lastFinishedPulling="2025-12-10 14:51:15.779874441 +0000 UTC m=+1180.729097858" observedRunningTime="2025-12-10 14:51:17.388184444 +0000 UTC m=+1182.337407861" watchObservedRunningTime="2025-12-10 14:51:17.396377004 +0000 UTC m=+1182.345600411" Dec 10 14:51:28 crc kubenswrapper[4718]: I1210 14:51:28.109321 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-966884dd6-7tsss" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.217317 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.219745 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.223316 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-95lqc" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.226609 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.228426 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.231791 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-v6dvn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.235443 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.240547 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.241793 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:51:56 crc kubenswrapper[4718]: W1210 14:51:56.246129 4718 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9xpxd": failed to list *v1.Secret: secrets "designate-operator-controller-manager-dockercfg-9xpxd" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Dec 10 14:51:56 crc kubenswrapper[4718]: E1210 14:51:56.246204 4718 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-9xpxd\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"designate-operator-controller-manager-dockercfg-9xpxd\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.252235 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.281524 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.315415 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.316847 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.321151 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pdl78" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.339077 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.340783 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.347099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt766\" (UniqueName: \"kubernetes.io/projected/82086b4c-0222-45a7-a3c3-fc2504f63a4e-kube-api-access-mt766\") pod \"cinder-operator-controller-manager-6c677c69b-jfdsv\" (UID: \"82086b4c-0222-45a7-a3c3-fc2504f63a4e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.347161 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gcpc\" (UniqueName: \"kubernetes.io/projected/61e4671d-9417-472d-9d76-64fdcc0e3297-kube-api-access-2gcpc\") pod \"barbican-operator-controller-manager-7d9dfd778-4s6xq\" (UID: \"61e4671d-9417-472d-9d76-64fdcc0e3297\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.347229 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdcp9\" (UniqueName: \"kubernetes.io/projected/a3023af7-f9ec-44a3-a532-0f6d51843443-kube-api-access-rdcp9\") pod 
\"designate-operator-controller-manager-697fb699cf-6mx57\" (UID: \"a3023af7-f9ec-44a3-a532-0f6d51843443\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.352698 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2lpkc" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.364455 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.374494 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.375829 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.379491 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-btlz6" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.391875 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.405965 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.442486 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.443952 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.448406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt766\" (UniqueName: \"kubernetes.io/projected/82086b4c-0222-45a7-a3c3-fc2504f63a4e-kube-api-access-mt766\") pod \"cinder-operator-controller-manager-6c677c69b-jfdsv\" (UID: \"82086b4c-0222-45a7-a3c3-fc2504f63a4e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.448448 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gcpc\" (UniqueName: \"kubernetes.io/projected/61e4671d-9417-472d-9d76-64fdcc0e3297-kube-api-access-2gcpc\") pod \"barbican-operator-controller-manager-7d9dfd778-4s6xq\" (UID: \"61e4671d-9417-472d-9d76-64fdcc0e3297\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.448475 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcglg\" (UniqueName: \"kubernetes.io/projected/a701287e-359e-429d-8b94-c4e06e8922a8-kube-api-access-bcglg\") pod \"heat-operator-controller-manager-5f64f6f8bb-gw22w\" (UID: \"a701287e-359e-429d-8b94-c4e06e8922a8\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.448511 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cv8z\" (UniqueName: \"kubernetes.io/projected/513a8781-70b0-4692-9141-0c60ef254a98-kube-api-access-4cv8z\") pod \"glance-operator-controller-manager-5697bb5779-xvwrl\" (UID: \"513a8781-70b0-4692-9141-0c60ef254a98\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:51:56 crc 
kubenswrapper[4718]: I1210 14:51:56.448541 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdcp9\" (UniqueName: \"kubernetes.io/projected/a3023af7-f9ec-44a3-a532-0f6d51843443-kube-api-access-rdcp9\") pod \"designate-operator-controller-manager-697fb699cf-6mx57\" (UID: \"a3023af7-f9ec-44a3-a532-0f6d51843443\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.448582 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgh2n\" (UniqueName: \"kubernetes.io/projected/80f8ae23-3a84-4810-9868-6571b6cf56a1-kube-api-access-wgh2n\") pod \"horizon-operator-controller-manager-68c6d99b8f-mshqn\" (UID: \"80f8ae23-3a84-4810-9868-6571b6cf56a1\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.448854 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5kcm8" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.449066 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.464512 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.466110 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.470059 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-v5rdl" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.478018 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.489843 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdcp9\" (UniqueName: \"kubernetes.io/projected/a3023af7-f9ec-44a3-a532-0f6d51843443-kube-api-access-rdcp9\") pod \"designate-operator-controller-manager-697fb699cf-6mx57\" (UID: \"a3023af7-f9ec-44a3-a532-0f6d51843443\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.489938 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.491458 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.494984 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rkpsb" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.495370 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.497611 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gcpc\" (UniqueName: \"kubernetes.io/projected/61e4671d-9417-472d-9d76-64fdcc0e3297-kube-api-access-2gcpc\") pod \"barbican-operator-controller-manager-7d9dfd778-4s6xq\" (UID: \"61e4671d-9417-472d-9d76-64fdcc0e3297\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.497935 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.503362 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vhmg8" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.525340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt766\" (UniqueName: \"kubernetes.io/projected/82086b4c-0222-45a7-a3c3-fc2504f63a4e-kube-api-access-mt766\") pod \"cinder-operator-controller-manager-6c677c69b-jfdsv\" (UID: \"82086b4c-0222-45a7-a3c3-fc2504f63a4e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.533657 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556075 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw5dc\" (UniqueName: \"kubernetes.io/projected/2516d98a-9991-4d5b-9791-14642a4ec629-kube-api-access-lw5dc\") pod \"ironic-operator-controller-manager-967d97867-2jgmr\" (UID: \"2516d98a-9991-4d5b-9791-14642a4ec629\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556170 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556244 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wgh2n\" (UniqueName: \"kubernetes.io/projected/80f8ae23-3a84-4810-9868-6571b6cf56a1-kube-api-access-wgh2n\") pod \"horizon-operator-controller-manager-68c6d99b8f-mshqn\" (UID: \"80f8ae23-3a84-4810-9868-6571b6cf56a1\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556424 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f24zs\" (UniqueName: \"kubernetes.io/projected/e7677f94-866d-45c7-b1c9-70fd2b7c7012-kube-api-access-f24zs\") pod \"manila-operator-controller-manager-5b5fd79c9c-cfcm2\" (UID: \"e7677f94-866d-45c7-b1c9-70fd2b7c7012\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556503 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcglg\" (UniqueName: \"kubernetes.io/projected/a701287e-359e-429d-8b94-c4e06e8922a8-kube-api-access-bcglg\") pod \"heat-operator-controller-manager-5f64f6f8bb-gw22w\" (UID: \"a701287e-359e-429d-8b94-c4e06e8922a8\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556628 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cv8z\" (UniqueName: \"kubernetes.io/projected/513a8781-70b0-4692-9141-0c60ef254a98-kube-api-access-4cv8z\") pod \"glance-operator-controller-manager-5697bb5779-xvwrl\" (UID: \"513a8781-70b0-4692-9141-0c60ef254a98\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556678 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vds\" (UniqueName: 
\"kubernetes.io/projected/6a2a49d9-73fc-4798-8173-ed230aa16811-kube-api-access-25vds\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.556744 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.569256 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.573886 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.574872 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.602988 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgh2n\" (UniqueName: \"kubernetes.io/projected/80f8ae23-3a84-4810-9868-6571b6cf56a1-kube-api-access-wgh2n\") pod \"horizon-operator-controller-manager-68c6d99b8f-mshqn\" (UID: \"80f8ae23-3a84-4810-9868-6571b6cf56a1\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.627537 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cv8z\" (UniqueName: \"kubernetes.io/projected/513a8781-70b0-4692-9141-0c60ef254a98-kube-api-access-4cv8z\") pod \"glance-operator-controller-manager-5697bb5779-xvwrl\" (UID: \"513a8781-70b0-4692-9141-0c60ef254a98\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.644242 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.663803 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f24zs\" (UniqueName: \"kubernetes.io/projected/e7677f94-866d-45c7-b1c9-70fd2b7c7012-kube-api-access-f24zs\") pod \"manila-operator-controller-manager-5b5fd79c9c-cfcm2\" (UID: \"e7677f94-866d-45c7-b1c9-70fd2b7c7012\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.663947 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zrx\" (UniqueName: \"kubernetes.io/projected/7d8ae7e9-7545-4ab6-b87c-6c5484b47424-kube-api-access-p2zrx\") pod \"keystone-operator-controller-manager-7765d96ddf-9sxwk\" (UID: \"7d8ae7e9-7545-4ab6-b87c-6c5484b47424\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.663997 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25vds\" (UniqueName: \"kubernetes.io/projected/6a2a49d9-73fc-4798-8173-ed230aa16811-kube-api-access-25vds\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.664044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw5dc\" (UniqueName: \"kubernetes.io/projected/2516d98a-9991-4d5b-9791-14642a4ec629-kube-api-access-lw5dc\") pod \"ironic-operator-controller-manager-967d97867-2jgmr\" (UID: \"2516d98a-9991-4d5b-9791-14642a4ec629\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:51:56 crc kubenswrapper[4718]: 
I1210 14:51:56.664074 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.664138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcglg\" (UniqueName: \"kubernetes.io/projected/a701287e-359e-429d-8b94-c4e06e8922a8-kube-api-access-bcglg\") pod \"heat-operator-controller-manager-5f64f6f8bb-gw22w\" (UID: \"a701287e-359e-429d-8b94-c4e06e8922a8\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:51:56 crc kubenswrapper[4718]: E1210 14:51:56.664285 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:51:56 crc kubenswrapper[4718]: E1210 14:51:56.664377 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert podName:6a2a49d9-73fc-4798-8173-ed230aa16811 nodeName:}" failed. No retries permitted until 2025-12-10 14:51:57.164332164 +0000 UTC m=+1222.113555581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert") pod "infra-operator-controller-manager-78d48bff9d-cvh4g" (UID: "6a2a49d9-73fc-4798-8173-ed230aa16811") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.665210 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.672376 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.674893 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.685946 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-l7fw2" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.701568 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.712841 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.718600 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25vds\" (UniqueName: \"kubernetes.io/projected/6a2a49d9-73fc-4798-8173-ed230aa16811-kube-api-access-25vds\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.728434 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f24zs\" (UniqueName: \"kubernetes.io/projected/e7677f94-866d-45c7-b1c9-70fd2b7c7012-kube-api-access-f24zs\") pod \"manila-operator-controller-manager-5b5fd79c9c-cfcm2\" (UID: \"e7677f94-866d-45c7-b1c9-70fd2b7c7012\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.738004 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw5dc\" (UniqueName: \"kubernetes.io/projected/2516d98a-9991-4d5b-9791-14642a4ec629-kube-api-access-lw5dc\") pod \"ironic-operator-controller-manager-967d97867-2jgmr\" (UID: \"2516d98a-9991-4d5b-9791-14642a4ec629\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.744373 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.746855 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.749227 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-zmh4r" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.762807 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.768922 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ndw\" (UniqueName: \"kubernetes.io/projected/32690a0c-0ce7-4639-b30f-18a1a91ed86d-kube-api-access-s9ndw\") pod \"mariadb-operator-controller-manager-79c8c4686c-7qqzr\" (UID: \"32690a0c-0ce7-4639-b30f-18a1a91ed86d\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.769036 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2zrx\" (UniqueName: \"kubernetes.io/projected/7d8ae7e9-7545-4ab6-b87c-6c5484b47424-kube-api-access-p2zrx\") pod \"keystone-operator-controller-manager-7765d96ddf-9sxwk\" (UID: \"7d8ae7e9-7545-4ab6-b87c-6c5484b47424\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.815939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2zrx\" (UniqueName: \"kubernetes.io/projected/7d8ae7e9-7545-4ab6-b87c-6c5484b47424-kube-api-access-p2zrx\") pod \"keystone-operator-controller-manager-7765d96ddf-9sxwk\" (UID: \"7d8ae7e9-7545-4ab6-b87c-6c5484b47424\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.817487 4718 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-426mn"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.819461 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.827180 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-98q4w" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.851539 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-s4568"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.856016 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.856162 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.863905 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.867679 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-4sjqd" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.885133 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ndw\" (UniqueName: \"kubernetes.io/projected/32690a0c-0ce7-4639-b30f-18a1a91ed86d-kube-api-access-s9ndw\") pod \"mariadb-operator-controller-manager-79c8c4686c-7qqzr\" (UID: \"32690a0c-0ce7-4639-b30f-18a1a91ed86d\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.885205 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pkvw\" (UniqueName: \"kubernetes.io/projected/191f3c0d-be7d-463e-9979-922dfb629747-kube-api-access-5pkvw\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-6smz5\" (UID: \"191f3c0d-be7d-463e-9979-922dfb629747\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.885239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlrl\" (UniqueName: \"kubernetes.io/projected/a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b-kube-api-access-txlrl\") pod \"nova-operator-controller-manager-697bc559fc-426mn\" (UID: \"a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.889692 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-426mn"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.958770 4718 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-s4568"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.965409 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ndw\" (UniqueName: \"kubernetes.io/projected/32690a0c-0ce7-4639-b30f-18a1a91ed86d-kube-api-access-s9ndw\") pod \"mariadb-operator-controller-manager-79c8c4686c-7qqzr\" (UID: \"32690a0c-0ce7-4639-b30f-18a1a91ed86d\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.967451 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.978648 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.982311 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mxvgh" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.984989 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv"] Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.986219 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.987221 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.990045 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pkvw\" (UniqueName: \"kubernetes.io/projected/191f3c0d-be7d-463e-9979-922dfb629747-kube-api-access-5pkvw\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-6smz5\" (UID: \"191f3c0d-be7d-463e-9979-922dfb629747\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.990123 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txlrl\" (UniqueName: \"kubernetes.io/projected/a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b-kube-api-access-txlrl\") pod \"nova-operator-controller-manager-697bc559fc-426mn\" (UID: \"a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.990190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zfb\" (UniqueName: \"kubernetes.io/projected/664faf77-d6a3-4b57-9dc9-ca7a4879c0ef-kube-api-access-k2zfb\") pod \"octavia-operator-controller-manager-998648c74-s4568\" (UID: \"664faf77-d6a3-4b57-9dc9-ca7a4879c0ef\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.991176 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:51:56 crc kubenswrapper[4718]: I1210 14:51:56.992067 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ddkpv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.007647 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.020733 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.022676 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.023157 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pkvw\" (UniqueName: \"kubernetes.io/projected/191f3c0d-be7d-463e-9979-922dfb629747-kube-api-access-5pkvw\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-6smz5\" (UID: \"191f3c0d-be7d-463e-9979-922dfb629747\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.029400 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jhchp" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.033976 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.035939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlrl\" (UniqueName: 
\"kubernetes.io/projected/a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b-kube-api-access-txlrl\") pod \"nova-operator-controller-manager-697bc559fc-426mn\" (UID: \"a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.044305 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.046060 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.049561 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-6kg6v" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.063620 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.074022 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.075932 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.077605 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-b6xqq" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.091515 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9q4h\" (UniqueName: \"kubernetes.io/projected/204f0155-9693-4239-8a7a-440255d5ad50-kube-api-access-k9q4h\") pod \"ovn-operator-controller-manager-b6456fdb6-hq6tv\" (UID: \"204f0155-9693-4239-8a7a-440255d5ad50\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.091649 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl2rl\" (UniqueName: \"kubernetes.io/projected/e4e01550-5ee5-4afc-a01a-b3ea52b47f23-kube-api-access-rl2rl\") pod \"placement-operator-controller-manager-78f8948974-6s7dx\" (UID: \"e4e01550-5ee5-4afc-a01a-b3ea52b47f23\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.091794 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhz84\" (UniqueName: \"kubernetes.io/projected/463f6bf2-85ef-488a-8223-56898633fe8f-kube-api-access-qhz84\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.091863 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.091944 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2zfb\" (UniqueName: \"kubernetes.io/projected/664faf77-d6a3-4b57-9dc9-ca7a4879c0ef-kube-api-access-k2zfb\") pod \"octavia-operator-controller-manager-998648c74-s4568\" (UID: \"664faf77-d6a3-4b57-9dc9-ca7a4879c0ef\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.106357 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.107652 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.112741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zfb\" (UniqueName: \"kubernetes.io/projected/664faf77-d6a3-4b57-9dc9-ca7a4879c0ef-kube-api-access-k2zfb\") pod \"octavia-operator-controller-manager-998648c74-s4568\" (UID: \"664faf77-d6a3-4b57-9dc9-ca7a4879c0ef\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.113210 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.123951 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.127005 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.129931 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6tx75" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.134325 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.139120 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.145932 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.146903 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-klkqh" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.204561 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl2rl\" (UniqueName: \"kubernetes.io/projected/e4e01550-5ee5-4afc-a01a-b3ea52b47f23-kube-api-access-rl2rl\") pod \"placement-operator-controller-manager-78f8948974-6s7dx\" (UID: \"e4e01550-5ee5-4afc-a01a-b3ea52b47f23\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.204785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.204868 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kktwl\" (UniqueName: \"kubernetes.io/projected/daeefe3a-b055-4ee9-be2e-a93afc257365-kube-api-access-kktwl\") pod \"swift-operator-controller-manager-9d58d64bc-7vvmc\" (UID: \"daeefe3a-b055-4ee9-be2e-a93afc257365\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.204951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qhz84\" (UniqueName: \"kubernetes.io/projected/463f6bf2-85ef-488a-8223-56898633fe8f-kube-api-access-qhz84\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.205026 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hkw\" (UniqueName: \"kubernetes.io/projected/469e8dbb-654f-464b-80f9-ac7b0d55439f-kube-api-access-24hkw\") pod \"telemetry-operator-controller-manager-58d5ff84df-jqlwv\" (UID: \"469e8dbb-654f-464b-80f9-ac7b0d55439f\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.205059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.215764 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9q4h\" (UniqueName: \"kubernetes.io/projected/204f0155-9693-4239-8a7a-440255d5ad50-kube-api-access-k9q4h\") pod \"ovn-operator-controller-manager-b6456fdb6-hq6tv\" (UID: \"204f0155-9693-4239-8a7a-440255d5ad50\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.215859 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sf4v\" (UniqueName: 
\"kubernetes.io/projected/12ba5675-3e82-41d7-be5a-ecbe1a440af5-kube-api-access-9sf4v\") pod \"test-operator-controller-manager-5854674fcc-r7vfj\" (UID: \"12ba5675-3e82-41d7-be5a-ecbe1a440af5\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.216947 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.217034 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert podName:6a2a49d9-73fc-4798-8173-ed230aa16811 nodeName:}" failed. No retries permitted until 2025-12-10 14:51:58.217006596 +0000 UTC m=+1223.166230013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert") pod "infra-operator-controller-manager-78d48bff9d-cvh4g" (UID: "6a2a49d9-73fc-4798-8173-ed230aa16811") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.217666 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.218265 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.219877 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert podName:463f6bf2-85ef-488a-8223-56898633fe8f nodeName:}" failed. No retries permitted until 2025-12-10 14:51:57.719837769 +0000 UTC m=+1222.669061186 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" (UID: "463f6bf2-85ef-488a-8223-56898633fe8f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.240964 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.269130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz84\" (UniqueName: \"kubernetes.io/projected/463f6bf2-85ef-488a-8223-56898633fe8f-kube-api-access-qhz84\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.277586 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl2rl\" (UniqueName: \"kubernetes.io/projected/e4e01550-5ee5-4afc-a01a-b3ea52b47f23-kube-api-access-rl2rl\") pod \"placement-operator-controller-manager-78f8948974-6s7dx\" (UID: \"e4e01550-5ee5-4afc-a01a-b3ea52b47f23\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.287942 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9q4h\" (UniqueName: \"kubernetes.io/projected/204f0155-9693-4239-8a7a-440255d5ad50-kube-api-access-k9q4h\") pod \"ovn-operator-controller-manager-b6456fdb6-hq6tv\" (UID: \"204f0155-9693-4239-8a7a-440255d5ad50\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.295335 4718 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9xpxd" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.299322 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.301276 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.305121 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.337903 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.339610 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-779lg\" (UniqueName: \"kubernetes.io/projected/21d69144-5afe-4aa8-95f0-c6e7c8802b14-kube-api-access-779lg\") pod \"watcher-operator-controller-manager-75944c9b7-rn6gn\" (UID: \"21d69144-5afe-4aa8-95f0-c6e7c8802b14\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.339678 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hkw\" (UniqueName: \"kubernetes.io/projected/469e8dbb-654f-464b-80f9-ac7b0d55439f-kube-api-access-24hkw\") pod \"telemetry-operator-controller-manager-58d5ff84df-jqlwv\" (UID: \"469e8dbb-654f-464b-80f9-ac7b0d55439f\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.339766 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sf4v\" (UniqueName: \"kubernetes.io/projected/12ba5675-3e82-41d7-be5a-ecbe1a440af5-kube-api-access-9sf4v\") pod \"test-operator-controller-manager-5854674fcc-r7vfj\" (UID: \"12ba5675-3e82-41d7-be5a-ecbe1a440af5\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.339812 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kktwl\" (UniqueName: \"kubernetes.io/projected/daeefe3a-b055-4ee9-be2e-a93afc257365-kube-api-access-kktwl\") pod \"swift-operator-controller-manager-9d58d64bc-7vvmc\" (UID: \"daeefe3a-b055-4ee9-be2e-a93afc257365\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.416819 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.448361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-779lg\" (UniqueName: \"kubernetes.io/projected/21d69144-5afe-4aa8-95f0-c6e7c8802b14-kube-api-access-779lg\") pod \"watcher-operator-controller-manager-75944c9b7-rn6gn\" (UID: \"21d69144-5afe-4aa8-95f0-c6e7c8802b14\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.505662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sf4v\" (UniqueName: \"kubernetes.io/projected/12ba5675-3e82-41d7-be5a-ecbe1a440af5-kube-api-access-9sf4v\") pod \"test-operator-controller-manager-5854674fcc-r7vfj\" (UID: \"12ba5675-3e82-41d7-be5a-ecbe1a440af5\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:51:57 crc 
kubenswrapper[4718]: I1210 14:51:57.511147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hkw\" (UniqueName: \"kubernetes.io/projected/469e8dbb-654f-464b-80f9-ac7b0d55439f-kube-api-access-24hkw\") pod \"telemetry-operator-controller-manager-58d5ff84df-jqlwv\" (UID: \"469e8dbb-654f-464b-80f9-ac7b0d55439f\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.522176 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-779lg\" (UniqueName: \"kubernetes.io/projected/21d69144-5afe-4aa8-95f0-c6e7c8802b14-kube-api-access-779lg\") pod \"watcher-operator-controller-manager-75944c9b7-rn6gn\" (UID: \"21d69144-5afe-4aa8-95f0-c6e7c8802b14\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.527354 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kktwl\" (UniqueName: \"kubernetes.io/projected/daeefe3a-b055-4ee9-be2e-a93afc257365-kube-api-access-kktwl\") pod \"swift-operator-controller-manager-9d58d64bc-7vvmc\" (UID: \"daeefe3a-b055-4ee9-be2e-a93afc257365\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.536178 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.539353 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.541196 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.551825 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.552093 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.555753 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hbc9j" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.585862 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.665223 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.680715 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.682101 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbsf\" (UniqueName: \"kubernetes.io/projected/81cbd3a0-2031-418d-95b6-fdac9d170a51-kube-api-access-qtbsf\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.682164 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.682267 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.707664 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.708568 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-j4j8d" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.709889 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.724555 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.726079 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4"] Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.784617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.784825 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmn7t\" (UniqueName: \"kubernetes.io/projected/91cdfe7c-2e49-4919-a7ff-0559e12ecf8b-kube-api-access-kmn7t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb754\" (UID: \"91cdfe7c-2e49-4919-a7ff-0559e12ecf8b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.784884 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.784956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbsf\" (UniqueName: \"kubernetes.io/projected/81cbd3a0-2031-418d-95b6-fdac9d170a51-kube-api-access-qtbsf\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.785004 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.785304 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.785426 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:51:58.285370761 +0000 UTC m=+1223.234594178 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.785590 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.785796 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert podName:463f6bf2-85ef-488a-8223-56898633fe8f nodeName:}" failed. No retries permitted until 2025-12-10 14:51:58.785693699 +0000 UTC m=+1223.734917116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" (UID: "463f6bf2-85ef-488a-8223-56898633fe8f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.785590 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: E1210 14:51:57.785880 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:51:58.285860993 +0000 UTC m=+1223.235084410 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "metrics-server-cert" not found Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.817758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbsf\" (UniqueName: \"kubernetes.io/projected/81cbd3a0-2031-418d-95b6-fdac9d170a51-kube-api-access-qtbsf\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.889221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmn7t\" (UniqueName: \"kubernetes.io/projected/91cdfe7c-2e49-4919-a7ff-0559e12ecf8b-kube-api-access-kmn7t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb754\" (UID: \"91cdfe7c-2e49-4919-a7ff-0559e12ecf8b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.918880 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmn7t\" (UniqueName: \"kubernetes.io/projected/91cdfe7c-2e49-4919-a7ff-0559e12ecf8b-kube-api-access-kmn7t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-nb754\" (UID: \"91cdfe7c-2e49-4919-a7ff-0559e12ecf8b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" Dec 10 14:51:57 crc kubenswrapper[4718]: I1210 14:51:57.998804 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv"] Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.017135 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl"] Dec 10 14:51:58 crc kubenswrapper[4718]: W1210 14:51:58.136610 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82086b4c_0222_45a7_a3c3_fc2504f63a4e.slice/crio-c6b714b412e412da97ec8a0c240c14310be2c8326f8144509655f17e0edf99dd WatchSource:0}: Error finding container c6b714b412e412da97ec8a0c240c14310be2c8326f8144509655f17e0edf99dd: Status 404 returned error can't find the container with id c6b714b412e412da97ec8a0c240c14310be2c8326f8144509655f17e0edf99dd Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.299545 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.300071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.300132 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.300300 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.300410 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.300482 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert podName:6a2a49d9-73fc-4798-8173-ed230aa16811 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:00.300461741 +0000 UTC m=+1225.249685158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert") pod "infra-operator-controller-manager-78d48bff9d-cvh4g" (UID: "6a2a49d9-73fc-4798-8173-ed230aa16811") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.300672 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.300722 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:51:59.300707977 +0000 UTC m=+1224.249931394 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "metrics-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.300787 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.300822 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:51:59.30081184 +0000 UTC m=+1224.250035257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "webhook-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.767377 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" event={"ID":"82086b4c-0222-45a7-a3c3-fc2504f63a4e","Type":"ContainerStarted","Data":"c6b714b412e412da97ec8a0c240c14310be2c8326f8144509655f17e0edf99dd"} Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.797656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" event={"ID":"513a8781-70b0-4692-9141-0c60ef254a98","Type":"ContainerStarted","Data":"d6d03f9465f529968f19d37d78db1e930a5e2debe0431de88ef975e6d21c4a53"} Dec 10 14:51:58 crc kubenswrapper[4718]: I1210 14:51:58.887595 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.887885 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:51:58 crc kubenswrapper[4718]: E1210 14:51:58.887954 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert podName:463f6bf2-85ef-488a-8223-56898633fe8f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:00.887936015 +0000 UTC m=+1225.837159432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" (UID: "463f6bf2-85ef-488a-8223-56898633fe8f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.376084 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.400268 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:59 crc kubenswrapper[4718]: E1210 14:51:59.400432 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: 
secret "webhook-server-cert" not found Dec 10 14:51:59 crc kubenswrapper[4718]: E1210 14:51:59.400613 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:01.400590051 +0000 UTC m=+1226.349813468 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "webhook-server-cert" not found Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.404770 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:51:59 crc kubenswrapper[4718]: E1210 14:51:59.405216 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:51:59 crc kubenswrapper[4718]: E1210 14:51:59.408646 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:01.408565876 +0000 UTC m=+1226.357789523 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "metrics-server-cert" not found Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.504182 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk"] Dec 10 14:51:59 crc kubenswrapper[4718]: W1210 14:51:59.546365 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d8ae7e9_7545_4ab6_b87c_6c5484b47424.slice/crio-37a90892ea102005064b74f4adfe97dc7857baddd49546b02a1a5849b5ca45d1 WatchSource:0}: Error finding container 37a90892ea102005064b74f4adfe97dc7857baddd49546b02a1a5849b5ca45d1: Status 404 returned error can't find the container with id 37a90892ea102005064b74f4adfe97dc7857baddd49546b02a1a5849b5ca45d1 Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.640558 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.653305 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.674806 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.686246 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.706900 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.752458 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.766569 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-426mn"] Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.822918 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" event={"ID":"80f8ae23-3a84-4810-9868-6571b6cf56a1","Type":"ContainerStarted","Data":"3db92a3cf9b08163ec2257a56ba1fbd18384c0abed5a2b74f6eb139fd5837081"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.831680 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" event={"ID":"191f3c0d-be7d-463e-9979-922dfb629747","Type":"ContainerStarted","Data":"2b8b5a97af48ee85efabb0528a648ca2e3bcf7b63a21f512884d8a70e779cb46"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.833817 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" event={"ID":"2516d98a-9991-4d5b-9791-14642a4ec629","Type":"ContainerStarted","Data":"5d31789c437c925234217ba1ac0b87ad06264edd578a68ebf61edd37e891ac40"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.835865 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" event={"ID":"a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b","Type":"ContainerStarted","Data":"42f196e12ab24f09e5d01fcf3dd878f7e512e47778f7f1737d564c8717f9290b"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.844345 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" event={"ID":"e7677f94-866d-45c7-b1c9-70fd2b7c7012","Type":"ContainerStarted","Data":"7ab1c881496d3f2320d0c73ce52a230d0b31f6b67d5d4f985dc60c2e88aa9952"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.845901 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" event={"ID":"7d8ae7e9-7545-4ab6-b87c-6c5484b47424","Type":"ContainerStarted","Data":"37a90892ea102005064b74f4adfe97dc7857baddd49546b02a1a5849b5ca45d1"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.847659 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" event={"ID":"61e4671d-9417-472d-9d76-64fdcc0e3297","Type":"ContainerStarted","Data":"b75fa648a0fd4f814ca9421aa8c71f6b288cb13275a935110604cf096b058102"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.849680 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" event={"ID":"32690a0c-0ce7-4639-b30f-18a1a91ed86d","Type":"ContainerStarted","Data":"213947985b46de2bf7c89b4a6c440f1e5cb8f6a6ade796ac69122ed2e21c6a44"} Dec 10 14:51:59 crc kubenswrapper[4718]: I1210 14:51:59.851151 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" event={"ID":"a701287e-359e-429d-8b94-c4e06e8922a8","Type":"ContainerStarted","Data":"49ad827c42543347fcccd100e7aaf3c639da9062cb7902b67fc48db7d21346a9"} Dec 10 14:52:00 crc kubenswrapper[4718]: W1210 14:52:00.063911 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4e01550_5ee5_4afc_a01a_b3ea52b47f23.slice/crio-f62604e79f8a241ca79c014e6ac3df79ca334a1f03cb7b2db0f0276951b5a41f WatchSource:0}: Error finding container 
f62604e79f8a241ca79c014e6ac3df79ca334a1f03cb7b2db0f0276951b5a41f: Status 404 returned error can't find the container with id f62604e79f8a241ca79c014e6ac3df79ca334a1f03cb7b2db0f0276951b5a41f Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.068790 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv"] Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.090110 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx"] Dec 10 14:52:00 crc kubenswrapper[4718]: W1210 14:52:00.104069 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12ba5675_3e82_41d7_be5a_ecbe1a440af5.slice/crio-6ffb141d05350cf057d18ea883bbe3bbb98ac259827d634dbc692237861940d1 WatchSource:0}: Error finding container 6ffb141d05350cf057d18ea883bbe3bbb98ac259827d634dbc692237861940d1: Status 404 returned error can't find the container with id 6ffb141d05350cf057d18ea883bbe3bbb98ac259827d634dbc692237861940d1 Dec 10 14:52:00 crc kubenswrapper[4718]: W1210 14:52:00.108888 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21d69144_5afe_4aa8_95f0_c6e7c8802b14.slice/crio-9483d23adf544c45b8da8030db12c5c5ad914fbce3fe54202e18d87762475d47 WatchSource:0}: Error finding container 9483d23adf544c45b8da8030db12c5c5ad914fbce3fe54202e18d87762475d47: Status 404 returned error can't find the container with id 9483d23adf544c45b8da8030db12c5c5ad914fbce3fe54202e18d87762475d47 Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.117438 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sf4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-r7vfj_openstack-operators(12ba5675-3e82-41d7-be5a-ecbe1a440af5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.117688 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-779lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rn6gn_openstack-operators(21d69144-5afe-4aa8-95f0-c6e7c8802b14): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.125744 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj"] Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.126065 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sf4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-r7vfj_openstack-operators(12ba5675-3e82-41d7-be5a-ecbe1a440af5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.126256 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-779lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rn6gn_openstack-operators(21d69144-5afe-4aa8-95f0-c6e7c8802b14): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.127523 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" podUID="12ba5675-3e82-41d7-be5a-ecbe1a440af5" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.127648 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" podUID="21d69144-5afe-4aa8-95f0-c6e7c8802b14" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.138007 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754"] Dec 10 14:52:00 crc kubenswrapper[4718]: W1210 14:52:00.146707 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddaeefe3a_b055_4ee9_be2e_a93afc257365.slice/crio-5307e9e4b313bd1753f9d67bff15ffa2f4704a194502a0e37795127350766216 WatchSource:0}: Error finding container 5307e9e4b313bd1753f9d67bff15ffa2f4704a194502a0e37795127350766216: Status 404 returned error can't find the container with id 5307e9e4b313bd1753f9d67bff15ffa2f4704a194502a0e37795127350766216 Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.152284 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2zfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-s4568_openstack-operators(664faf77-d6a3-4b57-9dc9-ca7a4879c0ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.154050 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-s4568"] Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.154935 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kktwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-7vvmc_openstack-operators(daeefe3a-b055-4ee9-be2e-a93afc257365): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.155456 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2zfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-s4568_openstack-operators(664faf77-d6a3-4b57-9dc9-ca7a4879c0ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.156919 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" podUID="664faf77-d6a3-4b57-9dc9-ca7a4879c0ef" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.157046 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kktwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-7vvmc_openstack-operators(daeefe3a-b055-4ee9-be2e-a93afc257365): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.158360 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" podUID="daeefe3a-b055-4ee9-be2e-a93afc257365" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.168447 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv"] Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.178631 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdcp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-6mx57_openstack-operators(a3023af7-f9ec-44a3-a532-0f6d51843443): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.179102 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn"] Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.181748 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rdcp9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-6mx57_openstack-operators(a3023af7-f9ec-44a3-a532-0f6d51843443): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.182981 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" podUID="a3023af7-f9ec-44a3-a532-0f6d51843443" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.193751 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc"] Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.201902 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57"] Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.343260 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.343599 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.343675 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert podName:6a2a49d9-73fc-4798-8173-ed230aa16811 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:04.343648071 +0000 UTC m=+1229.292871488 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert") pod "infra-operator-controller-manager-78d48bff9d-cvh4g" (UID: "6a2a49d9-73fc-4798-8173-ed230aa16811") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.892896 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" event={"ID":"469e8dbb-654f-464b-80f9-ac7b0d55439f","Type":"ContainerStarted","Data":"49a7647df54ff65ec7a5f44d67bdf1d238a74facf04f20520450e19930d678df"} Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.896161 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" event={"ID":"a3023af7-f9ec-44a3-a532-0f6d51843443","Type":"ContainerStarted","Data":"24d015f9b56c59fef60d3bd37e43fb6f603be428d23ec385111d0f8c6e537cd2"} Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.903183 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" event={"ID":"e4e01550-5ee5-4afc-a01a-b3ea52b47f23","Type":"ContainerStarted","Data":"f62604e79f8a241ca79c014e6ac3df79ca334a1f03cb7b2db0f0276951b5a41f"} Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.904430 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" podUID="a3023af7-f9ec-44a3-a532-0f6d51843443" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.912009 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" event={"ID":"12ba5675-3e82-41d7-be5a-ecbe1a440af5","Type":"ContainerStarted","Data":"6ffb141d05350cf057d18ea883bbe3bbb98ac259827d634dbc692237861940d1"} Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.924035 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" event={"ID":"daeefe3a-b055-4ee9-be2e-a93afc257365","Type":"ContainerStarted","Data":"5307e9e4b313bd1753f9d67bff15ffa2f4704a194502a0e37795127350766216"} Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.938255 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" podUID="daeefe3a-b055-4ee9-be2e-a93afc257365" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.938687 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" event={"ID":"664faf77-d6a3-4b57-9dc9-ca7a4879c0ef","Type":"ContainerStarted","Data":"9803372eec98c29cddc68c8111e60c56abbb107f92967e4d670a75847be70576"} Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.938819 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" podUID="12ba5675-3e82-41d7-be5a-ecbe1a440af5" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.945126 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" event={"ID":"21d69144-5afe-4aa8-95f0-c6e7c8802b14","Type":"ContainerStarted","Data":"9483d23adf544c45b8da8030db12c5c5ad914fbce3fe54202e18d87762475d47"} Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.954667 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" podUID="664faf77-d6a3-4b57-9dc9-ca7a4879c0ef" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.957700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.958799 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.958958 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert podName:463f6bf2-85ef-488a-8223-56898633fe8f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:04.958911076 +0000 UTC m=+1229.908134493 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" (UID: "463f6bf2-85ef-488a-8223-56898633fe8f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.961639 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" event={"ID":"204f0155-9693-4239-8a7a-440255d5ad50","Type":"ContainerStarted","Data":"f2763bf6bd291f4b5bfe65c1c6f95057578391db66d0e6912c89a99c703aa83a"} Dec 10 14:52:00 crc kubenswrapper[4718]: E1210 14:52:00.966735 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" podUID="21d69144-5afe-4aa8-95f0-c6e7c8802b14" Dec 10 14:52:00 crc kubenswrapper[4718]: I1210 14:52:00.969616 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" event={"ID":"91cdfe7c-2e49-4919-a7ff-0559e12ecf8b","Type":"ContainerStarted","Data":"d36a0f35778c8f231ee561b33192d0cc13b1e6fc36e94efa8fedc9ceee6d7f43"} Dec 10 14:52:01 crc kubenswrapper[4718]: I1210 14:52:01.820787 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: 
\"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:01 crc kubenswrapper[4718]: I1210 14:52:01.820880 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:01 crc kubenswrapper[4718]: E1210 14:52:01.829487 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:01 crc kubenswrapper[4718]: E1210 14:52:01.829566 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:05.829541651 +0000 UTC m=+1230.778765068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "webhook-server-cert" not found Dec 10 14:52:01 crc kubenswrapper[4718]: E1210 14:52:01.830375 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:01 crc kubenswrapper[4718]: E1210 14:52:01.830431 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:05.830420353 +0000 UTC m=+1230.779643770 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "metrics-server-cert" not found Dec 10 14:52:02 crc kubenswrapper[4718]: E1210 14:52:02.009545 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" podUID="664faf77-d6a3-4b57-9dc9-ca7a4879c0ef" Dec 10 14:52:02 crc kubenswrapper[4718]: E1210 14:52:02.010026 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" podUID="21d69144-5afe-4aa8-95f0-c6e7c8802b14" Dec 10 14:52:02 crc kubenswrapper[4718]: E1210 14:52:02.010123 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" podUID="12ba5675-3e82-41d7-be5a-ecbe1a440af5" Dec 10 14:52:02 crc kubenswrapper[4718]: E1210 14:52:02.011366 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" podUID="daeefe3a-b055-4ee9-be2e-a93afc257365" Dec 10 14:52:02 crc kubenswrapper[4718]: E1210 14:52:02.014165 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" podUID="a3023af7-f9ec-44a3-a532-0f6d51843443" Dec 10 14:52:04 crc kubenswrapper[4718]: I1210 14:52:04.463432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:52:04 crc kubenswrapper[4718]: E1210 
14:52:04.463918 4718 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:04 crc kubenswrapper[4718]: E1210 14:52:04.465047 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert podName:6a2a49d9-73fc-4798-8173-ed230aa16811 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:12.465011097 +0000 UTC m=+1237.414234514 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert") pod "infra-operator-controller-manager-78d48bff9d-cvh4g" (UID: "6a2a49d9-73fc-4798-8173-ed230aa16811") : secret "infra-operator-webhook-server-cert" not found Dec 10 14:52:05 crc kubenswrapper[4718]: I1210 14:52:05.005659 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:52:05 crc kubenswrapper[4718]: E1210 14:52:05.005947 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:05 crc kubenswrapper[4718]: E1210 14:52:05.006086 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert podName:463f6bf2-85ef-488a-8223-56898633fe8f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:13.006059391 +0000 UTC m=+1237.955282858 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" (UID: "463f6bf2-85ef-488a-8223-56898633fe8f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:05 crc kubenswrapper[4718]: I1210 14:52:05.841062 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:05 crc kubenswrapper[4718]: I1210 14:52:05.841197 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:05 crc kubenswrapper[4718]: E1210 14:52:05.841355 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:05 crc kubenswrapper[4718]: E1210 14:52:05.841430 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:05 crc kubenswrapper[4718]: E1210 14:52:05.841549 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:13.841517815 +0000 UTC m=+1238.790741392 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "webhook-server-cert" not found Dec 10 14:52:05 crc kubenswrapper[4718]: E1210 14:52:05.841664 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:13.841605377 +0000 UTC m=+1238.790828964 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "metrics-server-cert" not found Dec 10 14:52:12 crc kubenswrapper[4718]: I1210 14:52:12.491344 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:52:12 crc kubenswrapper[4718]: I1210 14:52:12.505257 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a2a49d9-73fc-4798-8173-ed230aa16811-cert\") pod \"infra-operator-controller-manager-78d48bff9d-cvh4g\" (UID: \"6a2a49d9-73fc-4798-8173-ed230aa16811\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:52:12 crc kubenswrapper[4718]: I1210 14:52:12.678609 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:52:13 crc kubenswrapper[4718]: I1210 14:52:13.103513 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:52:13 crc kubenswrapper[4718]: E1210 14:52:13.103870 4718 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:13 crc kubenswrapper[4718]: E1210 14:52:13.103982 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert podName:463f6bf2-85ef-488a-8223-56898633fe8f nodeName:}" failed. No retries permitted until 2025-12-10 14:52:29.103947415 +0000 UTC m=+1254.053170852 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert") pod "openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" (UID: "463f6bf2-85ef-488a-8223-56898633fe8f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 10 14:52:13 crc kubenswrapper[4718]: I1210 14:52:13.916233 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:13 crc kubenswrapper[4718]: I1210 14:52:13.916311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:13 crc kubenswrapper[4718]: E1210 14:52:13.916461 4718 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 10 14:52:13 crc kubenswrapper[4718]: E1210 14:52:13.916462 4718 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 10 14:52:13 crc kubenswrapper[4718]: E1210 14:52:13.916542 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:29.916517442 +0000 UTC m=+1254.865740869 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "webhook-server-cert" not found Dec 10 14:52:13 crc kubenswrapper[4718]: E1210 14:52:13.916564 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs podName:81cbd3a0-2031-418d-95b6-fdac9d170a51 nodeName:}" failed. No retries permitted until 2025-12-10 14:52:29.916553153 +0000 UTC m=+1254.865776570 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs") pod "openstack-operator-controller-manager-85cbc5886b-z2lw4" (UID: "81cbd3a0-2031-418d-95b6-fdac9d170a51") : secret "metrics-server-cert" not found Dec 10 14:52:14 crc kubenswrapper[4718]: E1210 14:52:14.378406 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 10 14:52:14 crc kubenswrapper[4718]: E1210 14:52:14.379206 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcglg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-gw22w_openstack-operators(a701287e-359e-429d-8b94-c4e06e8922a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:23 crc kubenswrapper[4718]: E1210 14:52:23.336524 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 10 14:52:23 crc kubenswrapper[4718]: E1210 14:52:23.337595 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mt766,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-jfdsv_openstack-operators(82086b4c-0222-45a7-a3c3-fc2504f63a4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:25 crc kubenswrapper[4718]: E1210 14:52:25.639569 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 10 14:52:25 crc kubenswrapper[4718]: E1210 14:52:25.640096 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pkvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-6smz5_openstack-operators(191f3c0d-be7d-463e-9979-922dfb629747): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:26 crc kubenswrapper[4718]: E1210 14:52:26.842859 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 10 14:52:26 crc kubenswrapper[4718]: E1210 14:52:26.843125 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f24zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-cfcm2_openstack-operators(e7677f94-866d-45c7-b1c9-70fd2b7c7012): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:29 crc kubenswrapper[4718]: I1210 14:52:29.194615 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:52:29 crc kubenswrapper[4718]: I1210 14:52:29.207117 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/463f6bf2-85ef-488a-8223-56898633fe8f-cert\") pod \"openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl\" (UID: \"463f6bf2-85ef-488a-8223-56898633fe8f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:52:29 crc kubenswrapper[4718]: I1210 14:52:29.403314 4718 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:52:29 crc kubenswrapper[4718]: E1210 14:52:29.984877 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 10 14:52:29 crc kubenswrapper[4718]: E1210 14:52:29.985141 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rl2rl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-6s7dx_openstack-operators(e4e01550-5ee5-4afc-a01a-b3ea52b47f23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:30 crc kubenswrapper[4718]: I1210 14:52:30.011253 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:30 crc kubenswrapper[4718]: I1210 14:52:30.011356 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:30 crc kubenswrapper[4718]: I1210 14:52:30.019226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-webhook-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:30 crc kubenswrapper[4718]: I1210 14:52:30.019882 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81cbd3a0-2031-418d-95b6-fdac9d170a51-metrics-certs\") pod \"openstack-operator-controller-manager-85cbc5886b-z2lw4\" (UID: \"81cbd3a0-2031-418d-95b6-fdac9d170a51\") " pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:30 crc kubenswrapper[4718]: I1210 14:52:30.076248 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:30 crc kubenswrapper[4718]: E1210 14:52:30.846107 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 10 14:52:30 crc kubenswrapper[4718]: E1210 14:52:30.846677 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lw5dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-2jgmr_openstack-operators(2516d98a-9991-4d5b-9791-14642a4ec629): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:32 crc kubenswrapper[4718]: E1210 14:52:32.732690 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f" Dec 10 14:52:32 crc kubenswrapper[4718]: E1210 14:52:32.733011 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24hkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-jqlwv_openstack-operators(469e8dbb-654f-464b-80f9-ac7b0d55439f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:35 crc kubenswrapper[4718]: E1210 14:52:35.371102 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 10 14:52:35 crc kubenswrapper[4718]: E1210 14:52:35.372229 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kmn7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-nb754_openstack-operators(91cdfe7c-2e49-4919-a7ff-0559e12ecf8b): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:35 crc kubenswrapper[4718]: E1210 14:52:35.374033 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" podUID="91cdfe7c-2e49-4919-a7ff-0559e12ecf8b" Dec 10 14:52:35 crc kubenswrapper[4718]: E1210 14:52:35.840614 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" podUID="91cdfe7c-2e49-4919-a7ff-0559e12ecf8b" Dec 10 14:52:39 crc kubenswrapper[4718]: E1210 14:52:39.454275 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a" Dec 10 14:52:39 crc kubenswrapper[4718]: E1210 14:52:39.455467 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-779lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rn6gn_openstack-operators(21d69144-5afe-4aa8-95f0-c6e7c8802b14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:40 crc kubenswrapper[4718]: E1210 14:52:40.707843 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 10 14:52:40 crc kubenswrapper[4718]: E1210 14:52:40.708209 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2zfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-s4568_openstack-operators(664faf77-d6a3-4b57-9dc9-ca7a4879c0ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:44 crc kubenswrapper[4718]: E1210 14:52:44.188098 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 10 14:52:44 crc kubenswrapper[4718]: E1210 14:52:44.188976 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-txlrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-426mn_openstack-operators(a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:44 crc kubenswrapper[4718]: E1210 14:52:44.895845 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 10 14:52:44 crc kubenswrapper[4718]: E1210 14:52:44.896128 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2zrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-9sxwk_openstack-operators(7d8ae7e9-7545-4ab6-b87c-6c5484b47424): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:52:45 crc kubenswrapper[4718]: I1210 14:52:45.792466 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g"] Dec 10 14:52:46 crc kubenswrapper[4718]: I1210 14:52:46.003236 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl"] Dec 10 14:52:46 crc kubenswrapper[4718]: I1210 14:52:46.157420 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4"] Dec 10 14:52:47 crc kubenswrapper[4718]: W1210 14:52:47.617016 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81cbd3a0_2031_418d_95b6_fdac9d170a51.slice/crio-0e6354072c86cf268da4c9e70f3550024660d38a22579cfda3e3a8d9ec29fcf1 WatchSource:0}: Error finding container 
0e6354072c86cf268da4c9e70f3550024660d38a22579cfda3e3a8d9ec29fcf1: Status 404 returned error can't find the container with id 0e6354072c86cf268da4c9e70f3550024660d38a22579cfda3e3a8d9ec29fcf1 Dec 10 14:52:47 crc kubenswrapper[4718]: W1210 14:52:47.619019 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a2a49d9_73fc_4798_8173_ed230aa16811.slice/crio-87522281cd59c2d20d5acb814784bb766fde0dcf825dcb2416c73763908753c0 WatchSource:0}: Error finding container 87522281cd59c2d20d5acb814784bb766fde0dcf825dcb2416c73763908753c0: Status 404 returned error can't find the container with id 87522281cd59c2d20d5acb814784bb766fde0dcf825dcb2416c73763908753c0 Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.084358 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.084882 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.182984 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" event={"ID":"80f8ae23-3a84-4810-9868-6571b6cf56a1","Type":"ContainerStarted","Data":"28436820eb2fa86fab32e99d675be3e44acc992fe1a8f4ace950809d77fad1a9"} Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.193176 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" event={"ID":"204f0155-9693-4239-8a7a-440255d5ad50","Type":"ContainerStarted","Data":"26fd92345fb5e58861549c6106922ef606193449027aca7f277523ed0dcf80b4"} Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.197765 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" event={"ID":"6a2a49d9-73fc-4798-8173-ed230aa16811","Type":"ContainerStarted","Data":"87522281cd59c2d20d5acb814784bb766fde0dcf825dcb2416c73763908753c0"} Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.200543 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" event={"ID":"81cbd3a0-2031-418d-95b6-fdac9d170a51","Type":"ContainerStarted","Data":"0e6354072c86cf268da4c9e70f3550024660d38a22579cfda3e3a8d9ec29fcf1"} Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.212886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" event={"ID":"32690a0c-0ce7-4639-b30f-18a1a91ed86d","Type":"ContainerStarted","Data":"1b3d92d1d72d75291870caa2881ae649b1615502fbd220e1ddaa6ebfb6fe11a5"} Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.218308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" event={"ID":"513a8781-70b0-4692-9141-0c60ef254a98","Type":"ContainerStarted","Data":"80ed42ab25bee4510b257c60ec8df07be33781fa29a886b5cc293f1df8717aa4"} Dec 10 14:52:48 crc kubenswrapper[4718]: I1210 14:52:48.231725 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" event={"ID":"463f6bf2-85ef-488a-8223-56898633fe8f","Type":"ContainerStarted","Data":"9f03e6fa4b286a706d7ed3bdb782ab7d2769989942f5c5dc3e394f3232ecddd3"} Dec 10 14:52:49 
crc kubenswrapper[4718]: I1210 14:52:49.366525 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" event={"ID":"61e4671d-9417-472d-9d76-64fdcc0e3297","Type":"ContainerStarted","Data":"96716c8185f19336de90182c0cfa335a4ba1349571d363fa463cdcd581358604"} Dec 10 14:52:49 crc kubenswrapper[4718]: I1210 14:52:49.372001 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" event={"ID":"12ba5675-3e82-41d7-be5a-ecbe1a440af5","Type":"ContainerStarted","Data":"7d8b77e04abc06416eeb63b0228c67f64ed8d02fae711570a8ed8bd212d318b2"} Dec 10 14:52:49 crc kubenswrapper[4718]: I1210 14:52:49.481791 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" event={"ID":"81cbd3a0-2031-418d-95b6-fdac9d170a51","Type":"ContainerStarted","Data":"960cb7117bf4005725c174f43813cec877cc138745180a316513020e54cef45a"} Dec 10 14:52:49 crc kubenswrapper[4718]: I1210 14:52:49.482142 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:52:49 crc kubenswrapper[4718]: I1210 14:52:49.496113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" event={"ID":"a3023af7-f9ec-44a3-a532-0f6d51843443","Type":"ContainerStarted","Data":"27cf16862b333599a554ab09aeb7bb08b0467b53cfb17dd2d4dc9d9d7f1d81d5"} Dec 10 14:52:49 crc kubenswrapper[4718]: I1210 14:52:49.628220 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" podStartSLOduration=52.628170993 podStartE2EDuration="52.628170993s" podCreationTimestamp="2025-12-10 14:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:52:49.609004372 +0000 UTC m=+1274.558227799" watchObservedRunningTime="2025-12-10 14:52:49.628170993 +0000 UTC m=+1274.577394400" Dec 10 14:52:54 crc kubenswrapper[4718]: I1210 14:52:54.714591 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" event={"ID":"daeefe3a-b055-4ee9-be2e-a93afc257365","Type":"ContainerStarted","Data":"22e48239cd188a2efd992b63aef059cee17c4679ab3eb25d5b93ae4b5f06d0c5"} Dec 10 14:53:00 crc kubenswrapper[4718]: I1210 14:53:00.084079 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-85cbc5886b-z2lw4" Dec 10 14:53:02 crc kubenswrapper[4718]: E1210 14:53:02.700661 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:02 crc kubenswrapper[4718]: E1210 14:53:02.700740 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:02 crc kubenswrapper[4718]: E1210 14:53:02.701495 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pkvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-6smz5_openstack-operators(191f3c0d-be7d-463e-9979-922dfb629747): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:02 crc kubenswrapper[4718]: E1210 14:53:02.701654 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcglg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-gw22w_openstack-operators(a701287e-359e-429d-8b94-c4e06e8922a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:02 crc kubenswrapper[4718]: E1210 14:53:02.702804 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" podUID="191f3c0d-be7d-463e-9979-922dfb629747" Dec 10 14:53:02 crc kubenswrapper[4718]: E1210 14:53:02.702853 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"]" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" podUID="a701287e-359e-429d-8b94-c4e06e8922a8" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.011933 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.013263 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24hkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-58d5ff84df-jqlwv_openstack-operators(469e8dbb-654f-464b-80f9-ac7b0d55439f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.014581 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" podUID="469e8dbb-654f-464b-80f9-ac7b0d55439f" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.069998 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.070252 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k2zfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-s4568_openstack-operators(664faf77-d6a3-4b57-9dc9-ca7a4879c0ef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.071728 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" podUID="664faf77-d6a3-4b57-9dc9-ca7a4879c0ef" Dec 10 14:53:16 crc kubenswrapper[4718]: E1210 14:53:16.655606 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48" Dec 10 14:53:16 crc 
kubenswrapper[4718]: E1210 14:53:16.656738 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent@sha256:add611bf73d5aab1ac07ef665281ed0e5ad1aded495b8b32927aa2e726abb29a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner@sha256:2f23894a78a13a0ae52fa2f8ae1e1b99282bebecd0cfda3db696e5d371097eaa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api@sha256:36946a77001110f391fb254ec77129803a6b7c34dacfa1a4c8c51aa8d23d57c5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator@sha256:dd58b29b5d88662a621c685c2b76fe8a71cc9e82aa85dff22a66182a6ceef3ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener@sha256:fc47ed1c6249c9f6ef13ef1eac82d5a34819a715dea5117d33df0d0dc69ace8b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier@sha256:e21d35c272d016f4dbd323dc827ee83538c96674adfb188e362aa652ce167b61,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24@sha256:6b92997128
3d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener@sha256:c2ace235f775334be02d78928802b76309543e869cc6b4b55843ee546691e6c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker@sha256:be77cc58b87f299b42bb2cbe74f3f8d028b8c887851a53209441b60e1363aeb5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:5a548c25fe3d02f7a042cb0a6d28fc8039a34c4a3b3d07aadda4aba3a926e777,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:41dc9cf27a902d9c7b392d730bd761cf3c391a548a841e9e4d38e1571f3c53bf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi@sha256:174f8f712eb5fdda5061a1a68624befb27bbe766842653788583ec74c5ae506a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter@sha256:7211a617ec657701ca819aa0ba28e1d5750f5bf2c1391b755cc4a48cc360b0fa,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification@sha256:df14f6de785b8aefc38ceb5b47088405224cfa914977c9ab811514cc77b08a67,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,ValueFr
om:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:b8d76f96b6f17a3318d089c0b5c0e6c292d969ab392cdcc708ec0f0188c953ae,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:43c55407c7c9b4141482533546e6570535373f7e36df374dfbbe388293c19dbf,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:097816f289af117f14cd8ee1678a9635e8da6de4a1bde834d02199c4ef65c5c0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api@sha256:9b4547f0bbb29be8d91f7adbf4914712fcca39a6841293c334ee97340c4eb570,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor@sha256:e29f7d54ba2134b90fc17e5781773331b7d67b936419dbc81e20b8a6ce0866b9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api@sha256:281668af8ed34c2464f3593d350cf7b695b41b81f40cc539ad74b7b65822afb9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9@sha256:84319e5dd6569ea531e64b688557c2a2e20deb5225f3d349e402e34858f00fe7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central@sha256:acb53e0e210562091843c212bc0cf5541daacd6f2bd18923430bae8c36578731,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_
IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns@sha256:be6f4002842ebadf30d035721567a7e669f12a6eef8c00dc89030b3b08f3dd2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer@sha256:988635be61f6ed8c0d707622193b7efe8e9b1dc7effbf9b09d2db5ec593b59e7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound@sha256:63e08752678a68571e1c54ceea42c113af493a04cdc22198a3713df7b53f87e5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker@sha256:6741d06b0f1bbeb2968807dc5be45853cdd3dfb9cc7ea6ef23e909ae24f3cbf4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr@sha256:1803a36d1a397a5595dddb4a2f791ab9443d3af97391a53928fa495ca7032d93,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid@sha256:d163fcf801d67d9c67b2ae4368675b75714db7c531de842aad43979a888c5d57,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler@sha256:581b65b646301e0fcb07582150ba63438f1353a85bf9acf1eb2acb4ce71c58bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron@sha256:15bf81d933a44128cb6f3264632a9563337eb3bfe82c4a33c746595467d3b0c3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd@sha256:df38dbd6b3eccec2abaa8e3618a385405ccec1b73ae8c3573a138b0c961ed31f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent@sha256:3a08e21338f651a90ee83ae46242b8c80c6448
8144f27a77848517049c3a8f5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:85729a662800e6b42ceb088545fed39a2ac58704b4a37fd540cdef3ebf9e59a2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent@sha256:ebeb4443ab9f9360925f7abd9c24b7a453390d678f79ed247d2042dcc6f9c3fc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent@sha256:04bb4cd601b08034c6cba18e701fcd36026ec4340402ed710a0bbd09d8e4884d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent@sha256:27b80783b7d4658d89dda9a09924e9ee472908a8fa1c86bcf3f773d17a4196e0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api@sha256:8cb133c5a5551e1aa11ef3326149db1babbf00924d0ff493ebe3346b69fd4b5b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn@sha256:13c3567176bb2d033f6c6b30e20404bd67a217e2537210bf222f3afe0c8619b7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFA
ULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine@sha256:60ac3446d57f1a97a6ca2d8e6584b00aa18704bc2707a7ac1a6a28c6d685d215,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon@sha256:dd7600bc5278c663cfcfecafd3fb051a2cd2ddc3c1efb07738bf09512aa23ae7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis@sha256:7e7788d1aae251e60f4012870140c65bce9760cd27feaeec5f65c42fe4ffce77,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:6a401117007514660c694248adce8136d83559caf1b38e475935335e09ac954a,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:364d50f873551805782c23264570eff40e3807f35d9bccdd456515b4e31da488,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:2d72dd490576e0cb670d21a08420888f3758d64ed0cbd2ef8b9aa8488ad2ce40,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:96fdf7cddf31509ee63950a9d61320d0b01beb1212e28f37a6e872d6589ded22,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:8b7534a2999075f919fc162d21f76026e8bf781913cc3d2ac07e484e9b2fc596,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent@sha256:d65eaaea2ab02d63af9d8a106619908fa01a2e56bd675
3edc5590e66e46270db,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone@sha256:d042d7f91bafb002affff8cf750d694a0da129377255c502028528fe2280e790,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api@sha256:a8faef9ea5e8ef8327b7fbb9b9cafc74c38c09c7e3b2365a7cad5eb49766f71d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler@sha256:88aa46ea03a5584560806aa4b093584fda6b2f54c562005b72be2e3615688090,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share@sha256:c08ecdfb7638c1897004347d835bdbabacff40a345f64c2b3111c377096bfa56,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils@sha256:8b4025a4f30e83acc0b51ac063eea701006a302a1acbdec53f54b540270887f7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api@sha256:4992f5ddbd20cca07e750846b2dbe7c51c5766c3002c388f8d8a158e347ec63d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5
26afed30c44ef41d54d63a4f4db122bc603f775243ae350a59d2e0b5050076b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:20b3ad38accb9eb8849599280a263d3436a5af03d89645e5ec4508586297ffde,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:378ed518b68ea809cffa2ff7a93d51e52cfc53af14eedc978924fdabccef0325,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager@sha256:3f746f7c6a8c48c0f4a800dcb4bc49bfbc4de4a9ca6a55d8f22bc515a92ea1d9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:e1f7bf105190c3cbbfcf0aeeb77a92d1466100ba8377221ed5eee228949e05bd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog@sha256:954b4c60705b229a968aba3b5b35ab02759378706103ed1189fae3e3316fac35,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker@sha256:f2e0025727efb95efa65e6af6338ae3fc79bf61095d6d54931a0be8d7fe9acac,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,ValueFrom:nil
,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather@sha256:854a802357b4f565a366fce3bf29b20c1b768ec4ab7e822ef52dfc2fef000d2c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi@sha256:194121c2d79401bd41f75428a437fe32a5806a6a160f7d80798ff66baed9afa5,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:fa24ce4aa285e3632c86a53e8d0385d4c788d049da42dd06570ad9d44aae00de,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:df45459c449f64cc6471e98c0890ac00dcc77a940f85d4e7e9d9dd52990d65b3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:947c1bb9373b7d3f2acea104a5666e394c830111bf80d133f1fe7238e4d06f28,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:425ebddc9d6851ee9c730e67eaf43039943dc7937fb11332a41335a9114b2d44,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:bea03c7c34dc6ef8bc163e12a8940011b8feebc44a2efaaba2d3c4c6c515d6c8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:33f4e5f7a715d48482ec46a42267ea992fa268585303c4f1bd3cbea072a6348b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/open
stack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account@sha256:a2280bc80b454dc9e5c95daf74b8a53d6f9e42fc16d45287e089fc41014fe1da,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:88d687a7bb593b2e61598b422baba84d67c114419590a6d83d15327d119ce208,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object@sha256:2635e02b99d380b2e547013c09c6c8da01bc89b3d3ce570e4d8f8656c7635b0e,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:ac7fefe1c93839c7ccb2aaa0a18751df0e9f64a36a3b4cc1b81d82d7774b8b45,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all@sha256:a357cf166caaeea230f8a912aceb042e3170c5d680844e8f97b936baa10834ed,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api@sha256:debe653cf73fece436c0fdc897a41f63b9b55b470ef04cddba573992f21ddf5d,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier@sha256:30cebe5bc6d290c90663bac2fc66122b38e677ec4714aaddb40a2dc239671ecd,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine@sha256:cb305c062a57fe0ec93b7ed6f6d0bb5b853872ed21dde1b354b853ceb569c6a3,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhz84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl_openstack-operators(463f6bf2-85ef-488a-8223-56898633fe8f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 
14:53:17.191233 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" podUID="21d69144-5afe-4aa8-95f0-c6e7c8802b14" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.262171 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" podUID="82086b4c-0222-45a7-a3c3-fc2504f63a4e" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.333375 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" podUID="2516d98a-9991-4d5b-9791-14642a4ec629" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.365939 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" podUID="e4e01550-5ee5-4afc-a01a-b3ea52b47f23" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.388212 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" podUID="e7677f94-866d-45c7-b1c9-70fd2b7c7012" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.389507 4718 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" podUID="a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.430927 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" podUID="463f6bf2-85ef-488a-8223-56898633fe8f" Dec 10 14:53:17 crc kubenswrapper[4718]: E1210 14:53:17.936827 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" podUID="7d8ae7e9-7545-4ab6-b87c-6c5484b47424" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.006915 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" event={"ID":"463f6bf2-85ef-488a-8223-56898633fe8f","Type":"ContainerStarted","Data":"fa2a0c05929b843eb260f1c259061bac496233fc692d6d5774e41005992529dc"} Dec 10 14:53:18 crc kubenswrapper[4718]: E1210 14:53:18.010895 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" podUID="463f6bf2-85ef-488a-8223-56898633fe8f" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.043661 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" event={"ID":"61e4671d-9417-472d-9d76-64fdcc0e3297","Type":"ContainerStarted","Data":"717838b1c0ac9833122f42c4bc7d211672f701b9e9eda561c76fae4e111c75b6"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.043718 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.043731 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" event={"ID":"91cdfe7c-2e49-4919-a7ff-0559e12ecf8b","Type":"ContainerStarted","Data":"4948695f43eaa9128e2aee0f72d7ba86c282a23356ee969b89f22a86c33d42e7"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.043744 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" event={"ID":"204f0155-9693-4239-8a7a-440255d5ad50","Type":"ContainerStarted","Data":"df4ea912d0f792045be9a4af1f9ae4c39111a5f19a92988995752545611d924e"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.043777 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.043790 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.048061 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.052088 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" 
event={"ID":"2516d98a-9991-4d5b-9791-14642a4ec629","Type":"ContainerStarted","Data":"ad145ac14c20b5e35933b9c038b970b4f56137c45e812bcfedfa1ef232399481"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.056834 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" event={"ID":"6a2a49d9-73fc-4798-8173-ed230aa16811","Type":"ContainerStarted","Data":"b23cccb14bd229e458e614ecc09eee4fae8a6b7da338835575c9639d4e954b99"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.068836 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" event={"ID":"a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b","Type":"ContainerStarted","Data":"d615809a411a77037339d2e9a0a6b277dcaa8fad46fd5d5cf44f78c38cf7dc8e"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.085824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" event={"ID":"e7677f94-866d-45c7-b1c9-70fd2b7c7012","Type":"ContainerStarted","Data":"b1eea2a47d4365d10ffa4d8fc25c84884fb0f2d23a90812a1862ab567e878f37"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.086980 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.087071 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 
14:53:18.092100 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" event={"ID":"82086b4c-0222-45a7-a3c3-fc2504f63a4e","Type":"ContainerStarted","Data":"20e09395e39b2a8e3a143daa069edb9617d0d92b389a165ebfe1ebc6e2b6ada5"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.099290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" event={"ID":"21d69144-5afe-4aa8-95f0-c6e7c8802b14","Type":"ContainerStarted","Data":"4bd90cc76e3f753283a978cf010c969c50f4e89df548f276f127052141eb85fc"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.112909 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" event={"ID":"191f3c0d-be7d-463e-9979-922dfb629747","Type":"ContainerStarted","Data":"36f9cd868ad893a2d430dc88489c68971128bb75fbbc247b9521f3e6521c7597"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.152124 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.169213 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" podStartSLOduration=5.220495024 podStartE2EDuration="1m22.169138796s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.709529783 +0000 UTC m=+1224.658753190" lastFinishedPulling="2025-12-10 14:53:16.658173545 +0000 UTC m=+1301.607396962" observedRunningTime="2025-12-10 14:53:18.133976255 +0000 UTC m=+1303.083199672" watchObservedRunningTime="2025-12-10 14:53:18.169138796 +0000 UTC m=+1303.118362213" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.185985 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.189645 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" event={"ID":"32690a0c-0ce7-4639-b30f-18a1a91ed86d","Type":"ContainerStarted","Data":"97c4c443b1edd3088b7b6f5fe3c403a92c823e9a8de2b935bfa9c8fb46c6e91a"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.192057 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.197128 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.237402 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-hq6tv" podStartSLOduration=5.453271004 podStartE2EDuration="1m22.237341312s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.059952366 +0000 UTC m=+1225.009175773" lastFinishedPulling="2025-12-10 14:53:16.844022664 +0000 UTC m=+1301.793246081" observedRunningTime="2025-12-10 14:53:18.220779108 +0000 UTC m=+1303.170002525" watchObservedRunningTime="2025-12-10 14:53:18.237341312 +0000 UTC m=+1303.186564729" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.255550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" event={"ID":"80f8ae23-3a84-4810-9868-6571b6cf56a1","Type":"ContainerStarted","Data":"7ac8edffa40e4b615331ba9d43c8fb15a61c8d5ddea5f0e4229ec1bbb61b7961"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.351933 4718 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-nb754" podStartSLOduration=25.315186438 podStartE2EDuration="1m21.351898506s" podCreationTimestamp="2025-12-10 14:51:57 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.091164655 +0000 UTC m=+1225.040388072" lastFinishedPulling="2025-12-10 14:52:56.127876723 +0000 UTC m=+1281.077100140" observedRunningTime="2025-12-10 14:53:18.264870937 +0000 UTC m=+1303.214094354" watchObservedRunningTime="2025-12-10 14:53:18.351898506 +0000 UTC m=+1303.301121923" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.365647 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" event={"ID":"7d8ae7e9-7545-4ab6-b87c-6c5484b47424","Type":"ContainerStarted","Data":"cd303209af9ba17b17726793a662a75de0f6d754ccaa4acb9724429db2d95699"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.372618 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7qqzr" podStartSLOduration=5.228625921 podStartE2EDuration="1m22.372588415s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.699753282 +0000 UTC m=+1224.648976699" lastFinishedPulling="2025-12-10 14:53:16.843715776 +0000 UTC m=+1301.792939193" observedRunningTime="2025-12-10 14:53:18.36731844 +0000 UTC m=+1303.316541867" watchObservedRunningTime="2025-12-10 14:53:18.372588415 +0000 UTC m=+1303.321811832" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.392908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" event={"ID":"e4e01550-5ee5-4afc-a01a-b3ea52b47f23","Type":"ContainerStarted","Data":"6e956f252f955da75f51b34810b3a47559140fec9bac0eb0c4f585c0abb5f1cc"} Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.470356 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" podStartSLOduration=5.844122914 podStartE2EDuration="1m22.470316738s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.154787065 +0000 UTC m=+1225.104010482" lastFinishedPulling="2025-12-10 14:53:16.780980889 +0000 UTC m=+1301.730204306" observedRunningTime="2025-12-10 14:53:18.455119209 +0000 UTC m=+1303.404342626" watchObservedRunningTime="2025-12-10 14:53:18.470316738 +0000 UTC m=+1303.419540155" Dec 10 14:53:18 crc kubenswrapper[4718]: I1210 14:53:18.733487 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" podStartSLOduration=5.688463866 podStartE2EDuration="1m22.733460286s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.655234972 +0000 UTC m=+1224.604458389" lastFinishedPulling="2025-12-10 14:53:16.700231392 +0000 UTC m=+1301.649454809" observedRunningTime="2025-12-10 14:53:18.716109772 +0000 UTC m=+1303.665333189" watchObservedRunningTime="2025-12-10 14:53:18.733460286 +0000 UTC m=+1303.682683713" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.524975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" event={"ID":"a701287e-359e-429d-8b94-c4e06e8922a8","Type":"ContainerStarted","Data":"d1181deb3eae0f057a56288d6fd92eec20195240a3319ece59e9f45c429dd8d8"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.525060 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" event={"ID":"a701287e-359e-429d-8b94-c4e06e8922a8","Type":"ContainerStarted","Data":"31d0840c8f38316b97d5cb006f391e9e2a9aba80ad60e2ad1a226a05b0d10fea"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 
14:53:19.525108 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.527719 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" event={"ID":"191f3c0d-be7d-463e-9979-922dfb629747","Type":"ContainerStarted","Data":"1e35e87c2cdf9925366ec410cb947ab0e3cb78483f6d87f96d21ed712cb09745"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.528301 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.530222 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" event={"ID":"12ba5675-3e82-41d7-be5a-ecbe1a440af5","Type":"ContainerStarted","Data":"5026a81e0cd018b02c9789fc32f4460ad41ebcc0465c10dc9aa252d37cd679c9"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.531161 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.533237 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" event={"ID":"6a2a49d9-73fc-4798-8173-ed230aa16811","Type":"ContainerStarted","Data":"6c9cb4d0fd03fadd1a9b00f1d21569c09b879ce94be627ad7008b97e0cb2a428"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.533863 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.536453 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" event={"ID":"513a8781-70b0-4692-9141-0c60ef254a98","Type":"ContainerStarted","Data":"850f45251bde35040ea671492adf1752bbdc0f561d639e4042fd163cf97ac2a9"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.537374 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.539222 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.540492 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" event={"ID":"a3023af7-f9ec-44a3-a532-0f6d51843443","Type":"ContainerStarted","Data":"f2ecb2f6d28f84f7ee615d22041c5559eeab73241b8c3bec87a1b3546f9d7fd0"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.540971 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.541559 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.543488 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" event={"ID":"82086b4c-0222-45a7-a3c3-fc2504f63a4e","Type":"ContainerStarted","Data":"22556a1ba8d8718d09895294c3f62e051ecad1ba12db0c7c2592cf2ca1c17a20"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.543623 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 
14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.544101 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.547250 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" event={"ID":"469e8dbb-654f-464b-80f9-ac7b0d55439f","Type":"ContainerStarted","Data":"88fb423ebdf9de4f9251f350152ada19488af81ca7076ca77eb0d562b4babe21"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.547340 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" event={"ID":"469e8dbb-654f-464b-80f9-ac7b0d55439f","Type":"ContainerStarted","Data":"ab72f6f2f013ae18ac6caa3be8a135d79e8fb9dd64edb4688777ebf5ba43c982"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.547584 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.556022 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-7vvmc" event={"ID":"daeefe3a-b055-4ee9-be2e-a93afc257365","Type":"ContainerStarted","Data":"0b29d17ce7d6fdb8844a48715484958a46c1f3fbdb1b0b45adb5bcc021c3de67"} Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.562107 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.572070 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" podStartSLOduration=5.608730474 podStartE2EDuration="1m23.572037049s" podCreationTimestamp="2025-12-10 
14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.424543415 +0000 UTC m=+1224.373766832" lastFinishedPulling="2025-12-10 14:53:17.38784999 +0000 UTC m=+1302.337073407" observedRunningTime="2025-12-10 14:53:19.561191032 +0000 UTC m=+1304.510414449" watchObservedRunningTime="2025-12-10 14:53:19.572037049 +0000 UTC m=+1304.521260476" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.577420 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-mshqn" Dec 10 14:53:19 crc kubenswrapper[4718]: E1210 14:53:19.584105 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:9d539fb6b72f91cfc6200bb91b7c6dbaeab17c7711342dd3a9549c66762a2d48\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" podUID="463f6bf2-85ef-488a-8223-56898633fe8f" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.640820 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" podStartSLOduration=6.545654936 podStartE2EDuration="1m23.64079498s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.727048831 +0000 UTC m=+1224.676272248" lastFinishedPulling="2025-12-10 14:53:16.822188875 +0000 UTC m=+1301.771412292" observedRunningTime="2025-12-10 14:53:19.599498092 +0000 UTC m=+1304.548721519" watchObservedRunningTime="2025-12-10 14:53:19.64079498 +0000 UTC m=+1304.590018397" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.642869 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" podStartSLOduration=2.858194072 
podStartE2EDuration="1m23.642726069s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:58.155262253 +0000 UTC m=+1223.104485670" lastFinishedPulling="2025-12-10 14:53:18.93979425 +0000 UTC m=+1303.889017667" observedRunningTime="2025-12-10 14:53:19.638756708 +0000 UTC m=+1304.587980125" watchObservedRunningTime="2025-12-10 14:53:19.642726069 +0000 UTC m=+1304.591949486" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.675712 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" podStartSLOduration=5.676077389 podStartE2EDuration="1m23.675685013s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.109055483 +0000 UTC m=+1225.058278900" lastFinishedPulling="2025-12-10 14:53:18.108663107 +0000 UTC m=+1303.057886524" observedRunningTime="2025-12-10 14:53:19.668946181 +0000 UTC m=+1304.618169598" watchObservedRunningTime="2025-12-10 14:53:19.675685013 +0000 UTC m=+1304.624908430" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.763339 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6mx57" podStartSLOduration=7.122125889 podStartE2EDuration="1m23.763294517s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.17843061 +0000 UTC m=+1225.127654027" lastFinishedPulling="2025-12-10 14:53:16.819599238 +0000 UTC m=+1301.768822655" observedRunningTime="2025-12-10 14:53:19.711196033 +0000 UTC m=+1304.660419480" watchObservedRunningTime="2025-12-10 14:53:19.763294517 +0000 UTC m=+1304.712517934" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.764425 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-xvwrl" podStartSLOduration=5.107407419 
podStartE2EDuration="1m23.764413286s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:58.12390006 +0000 UTC m=+1223.073123477" lastFinishedPulling="2025-12-10 14:53:16.780905917 +0000 UTC m=+1301.730129344" observedRunningTime="2025-12-10 14:53:19.745200264 +0000 UTC m=+1304.694423701" watchObservedRunningTime="2025-12-10 14:53:19.764413286 +0000 UTC m=+1304.713636693" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.786359 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" podStartSLOduration=55.350980049 podStartE2EDuration="1m23.786326507s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:47.632845609 +0000 UTC m=+1272.582069026" lastFinishedPulling="2025-12-10 14:53:16.068192077 +0000 UTC m=+1301.017415484" observedRunningTime="2025-12-10 14:53:19.783987727 +0000 UTC m=+1304.733211144" watchObservedRunningTime="2025-12-10 14:53:19.786326507 +0000 UTC m=+1304.735549924" Dec 10 14:53:19 crc kubenswrapper[4718]: I1210 14:53:19.834148 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-r7vfj" podStartSLOduration=7.131704984 podStartE2EDuration="1m23.834119831s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.117099929 +0000 UTC m=+1225.066323346" lastFinishedPulling="2025-12-10 14:53:16.819514776 +0000 UTC m=+1301.768738193" observedRunningTime="2025-12-10 14:53:19.828053315 +0000 UTC m=+1304.777276732" watchObservedRunningTime="2025-12-10 14:53:19.834119831 +0000 UTC m=+1304.783343248" Dec 10 14:53:20 crc kubenswrapper[4718]: I1210 14:53:20.662987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" 
event={"ID":"7d8ae7e9-7545-4ab6-b87c-6c5484b47424","Type":"ContainerStarted","Data":"d00b60febf76f9bd26207f21008992e4bfa6f770bdb95333db13ff5cb54d6848"} Dec 10 14:53:20 crc kubenswrapper[4718]: I1210 14:53:20.664117 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:53:20 crc kubenswrapper[4718]: I1210 14:53:20.666778 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" event={"ID":"e4e01550-5ee5-4afc-a01a-b3ea52b47f23","Type":"ContainerStarted","Data":"a0241134176f8364b2614a048794d28ce600e3912e7edde40bc80a188011c0aa"} Dec 10 14:53:20 crc kubenswrapper[4718]: I1210 14:53:20.667858 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:53:20 crc kubenswrapper[4718]: I1210 14:53:20.869659 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" podStartSLOduration=4.863820371 podStartE2EDuration="1m24.869574036s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.554825071 +0000 UTC m=+1224.504048488" lastFinishedPulling="2025-12-10 14:53:19.560578736 +0000 UTC m=+1304.509802153" observedRunningTime="2025-12-10 14:53:20.850159019 +0000 UTC m=+1305.799382456" watchObservedRunningTime="2025-12-10 14:53:20.869574036 +0000 UTC m=+1305.818797473" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.053436 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" event={"ID":"2516d98a-9991-4d5b-9791-14642a4ec629","Type":"ContainerStarted","Data":"8dde243c6b22d355820675bbe630d97950ab0a530c2405d14324425e89725ab0"} Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 
14:53:21.053696 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.063982 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" event={"ID":"a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b","Type":"ContainerStarted","Data":"a646535536f6036893dcff7234391b84dbd3481ad68f722072e0df3634994b4f"} Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.065987 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.094734 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" event={"ID":"e7677f94-866d-45c7-b1c9-70fd2b7c7012","Type":"ContainerStarted","Data":"779e744062515c7761c3acb17b87708725cce4677a3043477d5e9cec112f0419"} Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.095490 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.131800 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" podStartSLOduration=6.127584682 podStartE2EDuration="1m25.131720129s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.088530208 +0000 UTC m=+1225.037753625" lastFinishedPulling="2025-12-10 14:53:19.092665655 +0000 UTC m=+1304.041889072" observedRunningTime="2025-12-10 14:53:21.092197597 +0000 UTC m=+1306.041421024" watchObservedRunningTime="2025-12-10 14:53:21.131720129 +0000 UTC m=+1306.080943546" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 
14:53:21.138303 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" event={"ID":"21d69144-5afe-4aa8-95f0-c6e7c8802b14","Type":"ContainerStarted","Data":"39fcd4b9b6808278e5e06f9542f0eee20f63a6f08cc40328b1074a1dbcbd138f"} Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.144780 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.206494 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" podStartSLOduration=5.334975627 podStartE2EDuration="1m25.206455373s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.719534139 +0000 UTC m=+1224.668757556" lastFinishedPulling="2025-12-10 14:53:19.591013895 +0000 UTC m=+1304.540237302" observedRunningTime="2025-12-10 14:53:21.165753711 +0000 UTC m=+1306.114977138" watchObservedRunningTime="2025-12-10 14:53:21.206455373 +0000 UTC m=+1306.155678790" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.318882 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" podStartSLOduration=6.080491967 podStartE2EDuration="1m25.318774259s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.70628778 +0000 UTC m=+1224.655511197" lastFinishedPulling="2025-12-10 14:53:18.944570072 +0000 UTC m=+1303.893793489" observedRunningTime="2025-12-10 14:53:21.204136984 +0000 UTC m=+1306.153360401" watchObservedRunningTime="2025-12-10 14:53:21.318774259 +0000 UTC m=+1306.267997676" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.439043 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" podStartSLOduration=5.570009355 podStartE2EDuration="1m25.439009308s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:51:59.720157755 +0000 UTC m=+1224.669381172" lastFinishedPulling="2025-12-10 14:53:19.589157708 +0000 UTC m=+1304.538381125" observedRunningTime="2025-12-10 14:53:21.421460378 +0000 UTC m=+1306.370683795" watchObservedRunningTime="2025-12-10 14:53:21.439009308 +0000 UTC m=+1306.388232725" Dec 10 14:53:21 crc kubenswrapper[4718]: I1210 14:53:21.463515 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" podStartSLOduration=6.641183374 podStartE2EDuration="1m25.463482634s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.117538551 +0000 UTC m=+1225.066761968" lastFinishedPulling="2025-12-10 14:53:18.939837811 +0000 UTC m=+1303.889061228" observedRunningTime="2025-12-10 14:53:21.453638632 +0000 UTC m=+1306.402862049" watchObservedRunningTime="2025-12-10 14:53:21.463482634 +0000 UTC m=+1306.412706051" Dec 10 14:53:22 crc kubenswrapper[4718]: I1210 14:53:22.153214 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-6smz5" Dec 10 14:53:22 crc kubenswrapper[4718]: I1210 14:53:22.162270 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-cvh4g" Dec 10 14:53:26 crc kubenswrapper[4718]: I1210 14:53:26.811751 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-gw22w" Dec 10 14:53:26 crc kubenswrapper[4718]: I1210 14:53:26.813088 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" Dec 10 14:53:26 crc kubenswrapper[4718]: I1210 14:53:26.875057 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-2jgmr" Dec 10 14:53:27 crc kubenswrapper[4718]: I1210 14:53:27.179190 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-9sxwk" Dec 10 14:53:27 crc kubenswrapper[4718]: I1210 14:53:27.303867 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-426mn" Dec 10 14:53:27 crc kubenswrapper[4718]: I1210 14:53:27.368761 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-cfcm2" Dec 10 14:53:27 crc kubenswrapper[4718]: I1210 14:53:27.466575 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-6s7dx" Dec 10 14:53:27 crc kubenswrapper[4718]: I1210 14:53:27.711634 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jqlwv" Dec 10 14:53:27 crc kubenswrapper[4718]: I1210 14:53:27.965604 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rn6gn" Dec 10 14:53:30 crc kubenswrapper[4718]: I1210 14:53:30.803750 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" event={"ID":"664faf77-d6a3-4b57-9dc9-ca7a4879c0ef","Type":"ContainerStarted","Data":"cfed850e4f0eb5f80cdf2e3de7bee390cf58a219f52856572d3757b30f1c775c"} Dec 10 14:53:30 crc kubenswrapper[4718]: I1210 14:53:30.804042 
4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" event={"ID":"664faf77-d6a3-4b57-9dc9-ca7a4879c0ef","Type":"ContainerStarted","Data":"e6b65d6df7ab5602173ab544add4aab0a2247f8c3cd3f045ea194d5a13bf73c8"} Dec 10 14:53:30 crc kubenswrapper[4718]: I1210 14:53:30.804308 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:53:30 crc kubenswrapper[4718]: I1210 14:53:30.822772 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" podStartSLOduration=5.106164527 podStartE2EDuration="1m34.822747867s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:00.152036574 +0000 UTC m=+1225.101259991" lastFinishedPulling="2025-12-10 14:53:29.868619914 +0000 UTC m=+1314.817843331" observedRunningTime="2025-12-10 14:53:30.821076735 +0000 UTC m=+1315.770300162" watchObservedRunningTime="2025-12-10 14:53:30.822747867 +0000 UTC m=+1315.771971284" Dec 10 14:53:34 crc kubenswrapper[4718]: I1210 14:53:34.854707 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" event={"ID":"463f6bf2-85ef-488a-8223-56898633fe8f","Type":"ContainerStarted","Data":"9cafaf4cd2d4255f71a9ec76aff00f7d7b0c82f9b09a59f385cf75dcc45a89ef"} Dec 10 14:53:34 crc kubenswrapper[4718]: I1210 14:53:34.856101 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:53:34 crc kubenswrapper[4718]: I1210 14:53:34.890377 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" podStartSLOduration=52.458593064 
podStartE2EDuration="1m38.890354477s" podCreationTimestamp="2025-12-10 14:51:56 +0000 UTC" firstStartedPulling="2025-12-10 14:52:47.618798089 +0000 UTC m=+1272.568021506" lastFinishedPulling="2025-12-10 14:53:34.050559502 +0000 UTC m=+1318.999782919" observedRunningTime="2025-12-10 14:53:34.884315993 +0000 UTC m=+1319.833539410" watchObservedRunningTime="2025-12-10 14:53:34.890354477 +0000 UTC m=+1319.839577884" Dec 10 14:53:38 crc kubenswrapper[4718]: I1210 14:53:38.344911 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" podUID="664faf77-d6a3-4b57-9dc9-ca7a4879c0ef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:53:39 crc kubenswrapper[4718]: I1210 14:53:39.981122 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl" Dec 10 14:53:47 crc kubenswrapper[4718]: I1210 14:53:47.302660 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-s4568" Dec 10 14:53:48 crc kubenswrapper[4718]: I1210 14:53:48.084212 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:53:48 crc kubenswrapper[4718]: I1210 14:53:48.084446 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 10 14:53:48 crc kubenswrapper[4718]: I1210 14:53:48.084692 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:53:48 crc kubenswrapper[4718]: I1210 14:53:48.085349 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ec2fde063b0fe89bfac326b092793e5b0835b83f4f064a57717d8a122925145"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:53:48 crc kubenswrapper[4718]: I1210 14:53:48.085445 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://2ec2fde063b0fe89bfac326b092793e5b0835b83f4f064a57717d8a122925145" gracePeriod=600 Dec 10 14:53:49 crc kubenswrapper[4718]: I1210 14:53:49.454521 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="2ec2fde063b0fe89bfac326b092793e5b0835b83f4f064a57717d8a122925145" exitCode=0 Dec 10 14:53:49 crc kubenswrapper[4718]: I1210 14:53:49.454699 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"2ec2fde063b0fe89bfac326b092793e5b0835b83f4f064a57717d8a122925145"} Dec 10 14:53:49 crc kubenswrapper[4718]: I1210 14:53:49.454924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21"} Dec 10 14:53:49 
crc kubenswrapper[4718]: I1210 14:53:49.454985 4718 scope.go:117] "RemoveContainer" containerID="f2aafbfef6aca74c8d0022be5bbc83fbbe6d3fcc33361fe89187f40bd7acdfa4" Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.931513 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-pmrts"] Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.940332 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.954425 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6sld2" Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.955077 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.955367 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.955593 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 10 14:54:08 crc kubenswrapper[4718]: I1210 14:54:08.990045 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-pmrts"] Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.044732 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-wvw6h"] Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.047124 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.051797 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.063127 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-wvw6h"] Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.085783 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-config\") pod \"dnsmasq-dns-8468885bfc-pmrts\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.085917 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9md9c\" (UniqueName: \"kubernetes.io/projected/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-kube-api-access-9md9c\") pod \"dnsmasq-dns-8468885bfc-pmrts\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.187904 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.187983 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-config\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 
14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.188008 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj9qn\" (UniqueName: \"kubernetes.io/projected/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-kube-api-access-nj9qn\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.188172 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-config\") pod \"dnsmasq-dns-8468885bfc-pmrts\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.188301 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9md9c\" (UniqueName: \"kubernetes.io/projected/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-kube-api-access-9md9c\") pod \"dnsmasq-dns-8468885bfc-pmrts\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.189512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-config\") pod \"dnsmasq-dns-8468885bfc-pmrts\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.224138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9md9c\" (UniqueName: \"kubernetes.io/projected/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-kube-api-access-9md9c\") pod \"dnsmasq-dns-8468885bfc-pmrts\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 
14:54:09.363265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.363365 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj9qn\" (UniqueName: \"kubernetes.io/projected/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-kube-api-access-nj9qn\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.363418 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-config\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.365053 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-config\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.365721 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.366380 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.404508 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj9qn\" (UniqueName: \"kubernetes.io/projected/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-kube-api-access-nj9qn\") pod \"dnsmasq-dns-545d49fd5c-wvw6h\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:09 crc kubenswrapper[4718]: I1210 14:54:09.795912 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.597416 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-wvw6h"] Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.670067 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b9b4959cc-p58sh"] Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.671874 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.800495 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-pmrts"] Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.823641 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b9b4959cc-p58sh"] Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.851668 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-wvw6h"] Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.881787 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-dns-svc\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.881910 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-config\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.882010 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnngk\" (UniqueName: \"kubernetes.io/projected/bf0bd86a-1722-479c-8dcd-25080bc05c11-kube-api-access-wnngk\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.984335 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-dns-svc\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.984495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-config\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.984599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnngk\" (UniqueName: \"kubernetes.io/projected/bf0bd86a-1722-479c-8dcd-25080bc05c11-kube-api-access-wnngk\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.985476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-dns-svc\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:10 crc kubenswrapper[4718]: I1210 14:54:10.985643 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-config\") pod \"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.013759 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnngk\" (UniqueName: \"kubernetes.io/projected/bf0bd86a-1722-479c-8dcd-25080bc05c11-kube-api-access-wnngk\") pod 
\"dnsmasq-dns-b9b4959cc-p58sh\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.195007 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.206777 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-pmrts"] Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.264347 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86b8f4ff9-5scsd"] Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.266225 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.285968 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86b8f4ff9-5scsd"] Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.405773 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-config\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.406492 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-dns-svc\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.406568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6vl8\" (UniqueName: 
\"kubernetes.io/projected/d2148743-814f-409c-bbf9-65b8430f767c-kube-api-access-n6vl8\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.510552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-config\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.510645 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-dns-svc\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.510700 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6vl8\" (UniqueName: \"kubernetes.io/projected/d2148743-814f-409c-bbf9-65b8430f767c-kube-api-access-n6vl8\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.512169 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-config\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.513471 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-dns-svc\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" 
(UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.549756 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6vl8\" (UniqueName: \"kubernetes.io/projected/d2148743-814f-409c-bbf9-65b8430f767c-kube-api-access-n6vl8\") pod \"dnsmasq-dns-86b8f4ff9-5scsd\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.726523 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.754466 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b9b4959cc-p58sh"] Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.898739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" event={"ID":"bf0bd86a-1722-479c-8dcd-25080bc05c11","Type":"ContainerStarted","Data":"173b426c7ab1ee7b23cf2e07c0d00c3a281c2b77eedf6efae3529bd44cd02324"} Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.907493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-pmrts" event={"ID":"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919","Type":"ContainerStarted","Data":"e7e854b27fe4c2083fc67987539206de13def1157c1d20405fc2c9fa3d057799"} Dec 10 14:54:11 crc kubenswrapper[4718]: I1210 14:54:11.910269 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" event={"ID":"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3","Type":"ContainerStarted","Data":"87fa213d2cfcd20df65fcf2cb55aee09e060a173a2c7be18387f767077cf8e32"} Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.182522 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.184937 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.201090 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.207210 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dl8ph" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.207756 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.208277 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.208575 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.208967 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.210891 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.211173 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.855969 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.866660 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.876652 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.877137 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-svxt4" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.877615 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.878271 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.878655 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.878796 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.879425 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.948613 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fcf07d8-859b-4547-8a32-824f40da6a93-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.950564 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ddq\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-kube-api-access-79ddq\") pod \"rabbitmq-server-0\" (UID: 
\"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.950685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.950788 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.950921 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fcf07d8-859b-4547-8a32-824f40da6a93-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.950957 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.950994 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " 
pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.951059 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.951519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.951596 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:12 crc kubenswrapper[4718]: I1210 14:54:12.951692 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-config-data\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079330 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079361 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079419 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fcf07d8-859b-4547-8a32-824f40da6a93-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079440 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079459 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079507 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079524 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079548 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/282d32e9-d539-4bac-9fd1-a8735e8d92e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079574 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079614 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-config-data\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fcf07d8-859b-4547-8a32-824f40da6a93-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079706 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/282d32e9-d539-4bac-9fd1-a8735e8d92e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079722 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kgx\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-kube-api-access-55kgx\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079749 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79ddq\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-kube-api-access-79ddq\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079783 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079807 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079828 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.079859 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 
14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.150257 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.161136 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.250657 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.250768 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.256038 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.260173 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-config-data\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.263887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.263953 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/282d32e9-d539-4bac-9fd1-a8735e8d92e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.263974 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kgx\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-kube-api-access-55kgx\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264035 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264065 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264093 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264116 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264193 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/282d32e9-d539-4bac-9fd1-a8735e8d92e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264215 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.264230 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 
10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.265690 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.276025 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.277537 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.279998 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/282d32e9-d539-4bac-9fd1-a8735e8d92e1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.280626 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.280824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.280944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.293491 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.298702 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fcf07d8-859b-4547-8a32-824f40da6a93-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.299688 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.300707 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.307665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/282d32e9-d539-4bac-9fd1-a8735e8d92e1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.308752 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fcf07d8-859b-4547-8a32-824f40da6a93-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.312822 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.413759 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.414424 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.433498 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kgx\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-kube-api-access-55kgx\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.453552 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b9b4959cc-p58sh"] Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.465613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79ddq\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-kube-api-access-79ddq\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.472767 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5449989c59-s2cg4"] Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.475472 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.485907 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-s2cg4"] Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.497328 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.556699 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.622974 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-dns-svc\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.623071 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-config\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.623146 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfdx\" (UniqueName: \"kubernetes.io/projected/28bd664c-dd6f-4b1d-a9c3-239b94717974-kube-api-access-pbfdx\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.660022 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86b8f4ff9-5scsd"] Dec 10 14:54:13 crc kubenswrapper[4718]: W1210 14:54:13.680018 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2148743_814f_409c_bbf9_65b8430f767c.slice/crio-f7e5681debd50b6e2c0b90341f7455a01662f5e8b0e3bc62939e5dec75701f21 WatchSource:0}: Error finding container f7e5681debd50b6e2c0b90341f7455a01662f5e8b0e3bc62939e5dec75701f21: Status 404 returned error can't find the container with id f7e5681debd50b6e2c0b90341f7455a01662f5e8b0e3bc62939e5dec75701f21 Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.725258 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-dns-svc\") pod 
\"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.725333 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-config\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.725489 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbfdx\" (UniqueName: \"kubernetes.io/projected/28bd664c-dd6f-4b1d-a9c3-239b94717974-kube-api-access-pbfdx\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.728562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-dns-svc\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.728697 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-config\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.741966 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.752014 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbfdx\" (UniqueName: \"kubernetes.io/projected/28bd664c-dd6f-4b1d-a9c3-239b94717974-kube-api-access-pbfdx\") pod \"dnsmasq-dns-5449989c59-s2cg4\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.826635 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:54:13 crc kubenswrapper[4718]: I1210 14:54:13.890967 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.436121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" event={"ID":"d2148743-814f-409c-bbf9-65b8430f767c","Type":"ContainerStarted","Data":"f7e5681debd50b6e2c0b90341f7455a01662f5e8b0e3bc62939e5dec75701f21"} Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.543902 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.546113 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.561619 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.562302 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.562568 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-vmwr7" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.562749 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.563007 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.563717 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.563936 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.564149 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.669594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.669694 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.669770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.669828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.669854 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85fbk\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-kube-api-access-85fbk\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.669914 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.670007 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.670088 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.670313 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.670375 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.670446 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-plugins-conf\") pod \"rabbitmq-notifications-server-0\" 
(UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773601 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773681 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773802 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773859 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85fbk\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-kube-api-access-85fbk\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773958 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.773996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.774046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.775292 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.776333 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.776659 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.805947 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.869044 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.873653 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.899823 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85fbk\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-kube-api-access-85fbk\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.928834 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.930090 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " 
pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.931096 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:14 crc kubenswrapper[4718]: I1210 14:54:14.963478 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5611ee41-14a4-45d3-88b1-e6e6c9bc4d13-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.000247 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13\") " pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.278009 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.589318 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.591542 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.606407 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9mjhw" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.613403 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.613608 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.613740 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.653125 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712713 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcp5c\" (UniqueName: \"kubernetes.io/projected/0708d5de-311d-46e3-981e-7bd7a2fc495c-kube-api-access-zcp5c\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712806 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-operator-scripts\") pod \"openstack-galera-0\" 
(UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712864 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0708d5de-311d-46e3-981e-7bd7a2fc495c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0708d5de-311d-46e3-981e-7bd7a2fc495c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712953 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0708d5de-311d-46e3-981e-7bd7a2fc495c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.712985 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.713016 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " 
pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.715411 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.830472 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcp5c\" (UniqueName: \"kubernetes.io/projected/0708d5de-311d-46e3-981e-7bd7a2fc495c-kube-api-access-zcp5c\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831096 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831166 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0708d5de-311d-46e3-981e-7bd7a2fc495c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831272 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0708d5de-311d-46e3-981e-7bd7a2fc495c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831414 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0708d5de-311d-46e3-981e-7bd7a2fc495c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831469 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.831513 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.832224 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.836681 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0708d5de-311d-46e3-981e-7bd7a2fc495c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 
14:54:15.840140 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-config-data-default\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.842909 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-kolla-config\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.845429 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0708d5de-311d-46e3-981e-7bd7a2fc495c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.906878 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0708d5de-311d-46e3-981e-7bd7a2fc495c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.921087 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0708d5de-311d-46e3-981e-7bd7a2fc495c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:15 crc kubenswrapper[4718]: I1210 14:54:15.945858 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcp5c\" (UniqueName: 
\"kubernetes.io/projected/0708d5de-311d-46e3-981e-7bd7a2fc495c-kube-api-access-zcp5c\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.388090 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"0708d5de-311d-46e3-981e-7bd7a2fc495c\") " pod="openstack/openstack-galera-0" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.585279 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.601588 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.826538 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.845446 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.847368 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.853109 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.853612 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.854609 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-27xtv" Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.878953 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-s2cg4"] Dec 10 14:54:16 crc kubenswrapper[4718]: I1210 14:54:16.909100 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:16.994565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-combined-ca-bundle\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:16.994639 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-kolla-config\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:16.994683 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-config-data\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 
14:54:17 crc kubenswrapper[4718]: I1210 14:54:16.994711 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-memcached-tls-certs\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:16.994779 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbzl\" (UniqueName: \"kubernetes.io/projected/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-kube-api-access-2cbzl\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.024517 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.026884 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.032468 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.032658 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-f4xp6" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.033493 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.033634 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.042418 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.096642 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cbzl\" (UniqueName: \"kubernetes.io/projected/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-kube-api-access-2cbzl\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.097985 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098055 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098112 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-combined-ca-bundle\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098144 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-kolla-config\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098186 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098267 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tzvl\" (UniqueName: \"kubernetes.io/projected/f81943cf-47c7-424d-9473-2df3195bc9a6-kube-api-access-8tzvl\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098287 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-config-data\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc 
kubenswrapper[4718]: I1210 14:54:17.098305 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098336 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-memcached-tls-certs\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f81943cf-47c7-424d-9473-2df3195bc9a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098466 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81943cf-47c7-424d-9473-2df3195bc9a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.098495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81943cf-47c7-424d-9473-2df3195bc9a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc 
kubenswrapper[4718]: I1210 14:54:17.099784 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-config-data\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.101554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-kolla-config\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.118810 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-combined-ca-bundle\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.120910 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-memcached-tls-certs\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.122036 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cbzl\" (UniqueName: \"kubernetes.io/projected/571880ea-f2a9-4e9e-99a5-c8bcaffb8675-kube-api-access-2cbzl\") pod \"memcached-0\" (UID: \"571880ea-f2a9-4e9e-99a5-c8bcaffb8675\") " pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.201317 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81943cf-47c7-424d-9473-2df3195bc9a6-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.201543 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81943cf-47c7-424d-9473-2df3195bc9a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.202397 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.202463 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.202513 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.202568 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tzvl\" (UniqueName: \"kubernetes.io/projected/f81943cf-47c7-424d-9473-2df3195bc9a6-kube-api-access-8tzvl\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " 
pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.202598 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.202660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f81943cf-47c7-424d-9473-2df3195bc9a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.203295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f81943cf-47c7-424d-9473-2df3195bc9a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.204128 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.212444 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 
14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.213289 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.223666 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f81943cf-47c7-424d-9473-2df3195bc9a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.224852 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f81943cf-47c7-424d-9473-2df3195bc9a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.248207 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f81943cf-47c7-424d-9473-2df3195bc9a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.262234 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tzvl\" (UniqueName: \"kubernetes.io/projected/f81943cf-47c7-424d-9473-2df3195bc9a6-kube-api-access-8tzvl\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.263981 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f81943cf-47c7-424d-9473-2df3195bc9a6\") " pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.286043 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.386748 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.522018 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.858806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0fcf07d8-859b-4547-8a32-824f40da6a93","Type":"ContainerStarted","Data":"bd733297e1f2aec880af84c4c0846c3d26f69d8fea31f5413b6b94099bc9fe29"} Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.865436 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"282d32e9-d539-4bac-9fd1-a8735e8d92e1","Type":"ContainerStarted","Data":"83f4820489da7e0f23fce40539fbf05b5e91f0d1b4c31c463394476cbeda7059"} Dec 10 14:54:17 crc kubenswrapper[4718]: I1210 14:54:17.867524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" event={"ID":"28bd664c-dd6f-4b1d-a9c3-239b94717974","Type":"ContainerStarted","Data":"00f5aa4d56f522440443732213f24df960c770f9c14bca76a450b55894d92916"} Dec 10 14:54:18 crc kubenswrapper[4718]: I1210 14:54:18.232546 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.238126 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 10 
14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.330091 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13","Type":"ContainerStarted","Data":"2830ad9f8fa668c54bf65d72c3a974949a99001cf9c38553651452e0c5f28e41"} Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.345163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0708d5de-311d-46e3-981e-7bd7a2fc495c","Type":"ContainerStarted","Data":"c530372eda843fc82e663031441c5fc9e5f44517e5394fc39f4d22be7bd93636"} Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.420015 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.421829 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.444586 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mr8zl" Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.445374 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.484239 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rnc7\" (UniqueName: \"kubernetes.io/projected/df93c205-2f35-4c9d-b3ce-45174d5bfc2d-kube-api-access-8rnc7\") pod \"kube-state-metrics-0\" (UID: \"df93c205-2f35-4c9d-b3ce-45174d5bfc2d\") " pod="openstack/kube-state-metrics-0" Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.764869 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rnc7\" (UniqueName: \"kubernetes.io/projected/df93c205-2f35-4c9d-b3ce-45174d5bfc2d-kube-api-access-8rnc7\") pod \"kube-state-metrics-0\" (UID: 
\"df93c205-2f35-4c9d-b3ce-45174d5bfc2d\") " pod="openstack/kube-state-metrics-0" Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.860198 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rnc7\" (UniqueName: \"kubernetes.io/projected/df93c205-2f35-4c9d-b3ce-45174d5bfc2d-kube-api-access-8rnc7\") pod \"kube-state-metrics-0\" (UID: \"df93c205-2f35-4c9d-b3ce-45174d5bfc2d\") " pod="openstack/kube-state-metrics-0" Dec 10 14:54:19 crc kubenswrapper[4718]: I1210 14:54:19.898943 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 10 14:54:20 crc kubenswrapper[4718]: W1210 14:54:20.071987 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod571880ea_f2a9_4e9e_99a5_c8bcaffb8675.slice/crio-7c6e91225156156157feeb94f34463dfb4b71b7954eb627e3fcee80b0156b9b8 WatchSource:0}: Error finding container 7c6e91225156156157feeb94f34463dfb4b71b7954eb627e3fcee80b0156b9b8: Status 404 returned error can't find the container with id 7c6e91225156156157feeb94f34463dfb4b71b7954eb627e3fcee80b0156b9b8 Dec 10 14:54:20 crc kubenswrapper[4718]: I1210 14:54:20.111279 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 14:54:21 crc kubenswrapper[4718]: I1210 14:54:21.596088 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f81943cf-47c7-424d-9473-2df3195bc9a6","Type":"ContainerStarted","Data":"155e8f812bc3ad397393f379cbe8d8530b847ecd015d8d1abbf5baef5c118dd1"} Dec 10 14:54:21 crc kubenswrapper[4718]: I1210 14:54:21.603613 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"571880ea-f2a9-4e9e-99a5-c8bcaffb8675","Type":"ContainerStarted","Data":"7c6e91225156156157feeb94f34463dfb4b71b7954eb627e3fcee80b0156b9b8"} Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.900925 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.942983 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.968896 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.969193 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.971510 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.971704 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.971917 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8v2sk" Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 
14:54:24.991878 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:54:24 crc kubenswrapper[4718]: I1210 14:54:24.996594 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067265 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ea78b30-1cce-42f6-abae-e7e66ee3daae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067430 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067462 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067496 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067565 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067636 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067690 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.067720 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xf2q\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-kube-api-access-5xf2q\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.174956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " 
pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175086 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175144 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xf2q\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-kube-api-access-5xf2q\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175298 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ea78b30-1cce-42f6-abae-e7e66ee3daae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175352 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175377 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175616 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.175656 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.177821 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ea78b30-1cce-42f6-abae-e7e66ee3daae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.188063 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.207687 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.209350 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.209443 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/536f9c0805135df6ee87eba8f71795b119991da876d5796e8953829643544095/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.232095 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xf2q\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-kube-api-access-5xf2q\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.237719 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.238299 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.254615 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.350760 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.417790 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.475601 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.547703 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ftrjh"] Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.549314 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.553808 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5l2bv" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.573773 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.574467 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.599597 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-l8vtq"] Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.601840 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.603805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4f376b-2175-46d8-8b88-0560a3fcf231-scripts\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.603844 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-run-ovn\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.603883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-run\") pod \"ovn-controller-ftrjh\" (UID: 
\"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.603923 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4f376b-2175-46d8-8b88-0560a3fcf231-ovn-controller-tls-certs\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.603952 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-log-ovn\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.603976 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npj4s\" (UniqueName: \"kubernetes.io/projected/3e4f376b-2175-46d8-8b88-0560a3fcf231-kube-api-access-npj4s\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.604030 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4f376b-2175-46d8-8b88-0560a3fcf231-combined-ca-bundle\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.616503 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ftrjh"] Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.635018 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l8vtq"] Dec 
10 14:54:25 crc kubenswrapper[4718]: W1210 14:54:25.706332 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf93c205_2f35_4c9d_b3ce_45174d5bfc2d.slice/crio-ac5f251cc90fdf5c3f86c9f2b98c3eaac1c8218b5b067fede14aa969b67906cc WatchSource:0}: Error finding container ac5f251cc90fdf5c3f86c9f2b98c3eaac1c8218b5b067fede14aa969b67906cc: Status 404 returned error can't find the container with id ac5f251cc90fdf5c3f86c9f2b98c3eaac1c8218b5b067fede14aa969b67906cc Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.708811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-run-ovn\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709284 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-run\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709370 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-run\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709639 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-run-ovn\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " 
pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709760 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-run\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709647 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwvg\" (UniqueName: \"kubernetes.io/projected/061fa283-77d3-42e2-b267-2c01852d4123-kube-api-access-cvwvg\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709894 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4f376b-2175-46d8-8b88-0560a3fcf231-ovn-controller-tls-certs\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.709953 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-lib\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.710016 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-log-ovn\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 
14:54:25.710034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npj4s\" (UniqueName: \"kubernetes.io/projected/3e4f376b-2175-46d8-8b88-0560a3fcf231-kube-api-access-npj4s\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.710320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-log\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.710362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4f376b-2175-46d8-8b88-0560a3fcf231-combined-ca-bundle\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.710619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4f376b-2175-46d8-8b88-0560a3fcf231-scripts\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.710683 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061fa283-77d3-42e2-b267-2c01852d4123-scripts\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.710717 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-etc-ovs\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.713056 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e4f376b-2175-46d8-8b88-0560a3fcf231-scripts\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.713272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e4f376b-2175-46d8-8b88-0560a3fcf231-var-log-ovn\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.721506 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e4f376b-2175-46d8-8b88-0560a3fcf231-ovn-controller-tls-certs\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.726567 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4f376b-2175-46d8-8b88-0560a3fcf231-combined-ca-bundle\") pod \"ovn-controller-ftrjh\" (UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.737406 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npj4s\" (UniqueName: \"kubernetes.io/projected/3e4f376b-2175-46d8-8b88-0560a3fcf231-kube-api-access-npj4s\") pod \"ovn-controller-ftrjh\" 
(UID: \"3e4f376b-2175-46d8-8b88-0560a3fcf231\") " pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.813353 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-log\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.817645 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061fa283-77d3-42e2-b267-2c01852d4123-scripts\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.817672 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-etc-ovs\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.817777 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-run\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.817842 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvwvg\" (UniqueName: \"kubernetes.io/projected/061fa283-77d3-42e2-b267-2c01852d4123-kube-api-access-cvwvg\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 
14:54:25.817903 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-lib\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.818191 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-lib\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.817502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-log\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.821028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061fa283-77d3-42e2-b267-2c01852d4123-scripts\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.821160 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-var-run\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.821186 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/061fa283-77d3-42e2-b267-2c01852d4123-etc-ovs\") pod 
\"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.846670 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvwvg\" (UniqueName: \"kubernetes.io/projected/061fa283-77d3-42e2-b267-2c01852d4123-kube-api-access-cvwvg\") pod \"ovn-controller-ovs-l8vtq\" (UID: \"061fa283-77d3-42e2-b267-2c01852d4123\") " pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:25 crc kubenswrapper[4718]: I1210 14:54:25.901403 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ftrjh" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.180436 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.466163 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.468334 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.482457 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.483456 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.483670 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.483805 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tccl9" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.500683 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.508519 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fbf3ca-d871-4ccd-a412-636fa783e3d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668241 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fbf3ca-d871-4ccd-a412-636fa783e3d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668288 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50fbf3ca-d871-4ccd-a412-636fa783e3d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668333 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668370 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668445 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.668473 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nstcd\" (UniqueName: 
\"kubernetes.io/projected/50fbf3ca-d871-4ccd-a412-636fa783e3d4-kube-api-access-nstcd\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.709104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df93c205-2f35-4c9d-b3ce-45174d5bfc2d","Type":"ContainerStarted","Data":"ac5f251cc90fdf5c3f86c9f2b98c3eaac1c8218b5b067fede14aa969b67906cc"} Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.835499 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fbf3ca-d871-4ccd-a412-636fa783e3d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836251 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50fbf3ca-d871-4ccd-a412-636fa783e3d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836351 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836420 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836508 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836555 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nstcd\" (UniqueName: \"kubernetes.io/projected/50fbf3ca-d871-4ccd-a412-636fa783e3d4-kube-api-access-nstcd\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836636 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fbf3ca-d871-4ccd-a412-636fa783e3d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.836748 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.837510 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.838365 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/50fbf3ca-d871-4ccd-a412-636fa783e3d4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.839907 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50fbf3ca-d871-4ccd-a412-636fa783e3d4-config\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.842919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fbf3ca-d871-4ccd-a412-636fa783e3d4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:26 crc kubenswrapper[4718]: I1210 14:54:26.897770 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:27 crc kubenswrapper[4718]: I1210 14:54:27.009035 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:27 crc kubenswrapper[4718]: I1210 14:54:27.009223 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:27 crc kubenswrapper[4718]: I1210 
14:54:27.027883 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fbf3ca-d871-4ccd-a412-636fa783e3d4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:27 crc kubenswrapper[4718]: I1210 14:54:27.039217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nstcd\" (UniqueName: \"kubernetes.io/projected/50fbf3ca-d871-4ccd-a412-636fa783e3d4-kube-api-access-nstcd\") pod \"ovsdbserver-nb-0\" (UID: \"50fbf3ca-d871-4ccd-a412-636fa783e3d4\") " pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:27 crc kubenswrapper[4718]: I1210 14:54:27.276137 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 10 14:54:27 crc kubenswrapper[4718]: I1210 14:54:27.971660 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:54:28 crc kubenswrapper[4718]: I1210 14:54:28.564358 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ftrjh"] Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.411103 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.413742 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.425743 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.426049 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-wr72f" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.426213 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.444167 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.444845 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 10 14:54:29 crc kubenswrapper[4718]: W1210 14:54:29.497659 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e4f376b_2175_46d8_8b88_0560a3fcf231.slice/crio-e6aa6bdcf1c00442e37cbd1df6bea1c0fbce24b10244bcf16856b0edfb57b5b3 WatchSource:0}: Error finding container e6aa6bdcf1c00442e37cbd1df6bea1c0fbce24b10244bcf16856b0edfb57b5b3: Status 404 returned error can't find the container with id e6aa6bdcf1c00442e37cbd1df6bea1c0fbce24b10244bcf16856b0edfb57b5b3 Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.548160 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ccf190d-cc0e-471c-b506-9784b1e8b038-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.548226 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.548283 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ccf190d-cc0e-471c-b506-9784b1e8b038-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.548349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n5x9\" (UniqueName: \"kubernetes.io/projected/1ccf190d-cc0e-471c-b506-9784b1e8b038-kube-api-access-4n5x9\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.548375 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.555084 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.555141 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.555364 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ccf190d-cc0e-471c-b506-9784b1e8b038-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.574072 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674063 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ccf190d-cc0e-471c-b506-9784b1e8b038-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674183 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ccf190d-cc0e-471c-b506-9784b1e8b038-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1ccf190d-cc0e-471c-b506-9784b1e8b038-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674464 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n5x9\" (UniqueName: \"kubernetes.io/projected/1ccf190d-cc0e-471c-b506-9784b1e8b038-kube-api-access-4n5x9\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674494 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674528 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674553 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.674867 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ccf190d-cc0e-471c-b506-9784b1e8b038-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.675899 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ccf190d-cc0e-471c-b506-9784b1e8b038-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.676627 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ccf190d-cc0e-471c-b506-9784b1e8b038-config\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.676870 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.677745 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-l8vtq"] Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.724262 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.726145 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.763937 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n5x9\" (UniqueName: \"kubernetes.io/projected/1ccf190d-cc0e-471c-b506-9784b1e8b038-kube-api-access-4n5x9\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.794535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ccf190d-cc0e-471c-b506-9784b1e8b038-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:29 crc kubenswrapper[4718]: I1210 14:54:29.858771 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1ccf190d-cc0e-471c-b506-9784b1e8b038\") " pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:30 crc kubenswrapper[4718]: I1210 14:54:30.075619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ftrjh" event={"ID":"3e4f376b-2175-46d8-8b88-0560a3fcf231","Type":"ContainerStarted","Data":"e6aa6bdcf1c00442e37cbd1df6bea1c0fbce24b10244bcf16856b0edfb57b5b3"} Dec 10 14:54:30 crc kubenswrapper[4718]: I1210 14:54:30.085946 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerStarted","Data":"03ac05fb6f73e02c896909f290014b4ea13c46ae8f9e5843da11be794def74d4"} Dec 10 14:54:30 crc kubenswrapper[4718]: I1210 14:54:30.136622 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 10 14:54:36 crc kubenswrapper[4718]: I1210 14:54:35.995555 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8vtq" event={"ID":"061fa283-77d3-42e2-b267-2c01852d4123","Type":"ContainerStarted","Data":"7903ea9cc31ced28142bccbd9104a058ef96b792f6de58f01e44bbe3a870d682"} Dec 10 14:54:36 crc kubenswrapper[4718]: I1210 14:54:36.599635 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"50fbf3ca-d871-4ccd-a412-636fa783e3d4","Type":"ContainerStarted","Data":"716c9a187e23f073da27bcc7624b74f755d7e35abe3415a24d16486ae6c3d9c7"} Dec 10 14:54:37 crc kubenswrapper[4718]: I1210 14:54:37.922296 4718 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4bmlt container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: i/o timeout" start-of-body= Dec 10 14:54:37 crc kubenswrapper[4718]: I1210 14:54:37.922821 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" podUID="4ec81ced-fd94-49a3-aa46-d7febbbfb825" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: i/o timeout" Dec 10 14:54:37 crc kubenswrapper[4718]: I1210 14:54:37.931469 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" podUID="82086b4c-0222-45a7-a3c3-fc2504f63a4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.030704 4718 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r4272 container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.030838 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r4272" podUID="3bc2976e-bdb0-4450-9c5e-73052e705f7a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.030968 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-4s6xq" podUID="61e4671d-9417-472d-9d76-64fdcc0e3297" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.031764 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6fdd887f57-qm9df" podUID="dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.57:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.038659 4718 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-4bmlt container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.038773 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-4bmlt" podUID="4ec81ced-fd94-49a3-aa46-d7febbbfb825" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.080123 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jfdsv" podUID="82086b4c-0222-45a7-a3c3-fc2504f63a4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.081513 4718 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qn8tr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.081550 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qn8tr" podUID="079f1ed7-7f70-4b4c-9afd-cf0286348562" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.447987 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.918341 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nf6b6"] Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.922266 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.930058 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 10 14:54:38 crc kubenswrapper[4718]: I1210 14:54:38.938419 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nf6b6"] Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.017320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bqtf\" (UniqueName: \"kubernetes.io/projected/7326e5bc-27b1-4b9a-b0ea-979589622ea3-kube-api-access-2bqtf\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.017554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7326e5bc-27b1-4b9a-b0ea-979589622ea3-config\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.017661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7326e5bc-27b1-4b9a-b0ea-979589622ea3-ovn-rundir\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.017752 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7326e5bc-27b1-4b9a-b0ea-979589622ea3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") 
" pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.017939 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7326e5bc-27b1-4b9a-b0ea-979589622ea3-combined-ca-bundle\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.018111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7326e5bc-27b1-4b9a-b0ea-979589622ea3-ovs-rundir\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.122590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7326e5bc-27b1-4b9a-b0ea-979589622ea3-combined-ca-bundle\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.122694 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7326e5bc-27b1-4b9a-b0ea-979589622ea3-ovs-rundir\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.122742 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bqtf\" (UniqueName: \"kubernetes.io/projected/7326e5bc-27b1-4b9a-b0ea-979589622ea3-kube-api-access-2bqtf\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " 
pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.122796 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7326e5bc-27b1-4b9a-b0ea-979589622ea3-config\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.122825 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7326e5bc-27b1-4b9a-b0ea-979589622ea3-ovn-rundir\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.122859 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7326e5bc-27b1-4b9a-b0ea-979589622ea3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.123319 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7326e5bc-27b1-4b9a-b0ea-979589622ea3-ovs-rundir\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.123438 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7326e5bc-27b1-4b9a-b0ea-979589622ea3-ovn-rundir\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 
14:54:39.124562 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7326e5bc-27b1-4b9a-b0ea-979589622ea3-config\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.143059 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7326e5bc-27b1-4b9a-b0ea-979589622ea3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.147106 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7326e5bc-27b1-4b9a-b0ea-979589622ea3-combined-ca-bundle\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.187460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bqtf\" (UniqueName: \"kubernetes.io/projected/7326e5bc-27b1-4b9a-b0ea-979589622ea3-kube-api-access-2bqtf\") pod \"ovn-controller-metrics-nf6b6\" (UID: \"7326e5bc-27b1-4b9a-b0ea-979589622ea3\") " pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.288674 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-nf6b6" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.308891 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86b8f4ff9-5scsd"] Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.351607 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-vs8bb"] Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.360654 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.374816 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.378422 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-vs8bb"] Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.436334 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-dns-svc\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.437054 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.437328 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-config\") pod 
\"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.437628 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltw2c\" (UniqueName: \"kubernetes.io/projected/14db89d0-8a08-4320-b700-ec422442571c-kube-api-access-ltw2c\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.548374 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltw2c\" (UniqueName: \"kubernetes.io/projected/14db89d0-8a08-4320-b700-ec422442571c-kube-api-access-ltw2c\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.549175 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-dns-svc\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.549319 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.549643 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-config\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" 
(UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.551944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-dns-svc\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.552301 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-config\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.552829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.685207 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltw2c\" (UniqueName: \"kubernetes.io/projected/14db89d0-8a08-4320-b700-ec422442571c-kube-api-access-ltw2c\") pod \"dnsmasq-dns-6fb75c485f-vs8bb\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.711911 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.833925 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-s2cg4"] Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.904516 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-9888d"] Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.922495 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.927748 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 10 14:54:39 crc kubenswrapper[4718]: I1210 14:54:39.960246 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-9888d"] Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.006901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-sb\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.007006 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-config\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.007038 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-dns-svc\") pod 
\"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.007113 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-nb\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.007195 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m6rg\" (UniqueName: \"kubernetes.io/projected/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-kube-api-access-8m6rg\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.456760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m6rg\" (UniqueName: \"kubernetes.io/projected/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-kube-api-access-8m6rg\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.457213 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-sb\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.457307 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-config\") pod 
\"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.457348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-dns-svc\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.457696 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-nb\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.468682 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-dns-svc\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.476813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-nb\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.477588 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-config\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " 
pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.517041 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m6rg\" (UniqueName: \"kubernetes.io/projected/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-kube-api-access-8m6rg\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.542237 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-sb\") pod \"dnsmasq-dns-6dbf544cc9-9888d\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:40 crc kubenswrapper[4718]: I1210 14:54:40.579824 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:54:43 crc kubenswrapper[4718]: E1210 14:54:43.190431 4718 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.17s" Dec 10 14:54:51 crc kubenswrapper[4718]: I1210 14:54:51.827886 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ccf190d-cc0e-471c-b506-9784b1e8b038","Type":"ContainerStarted","Data":"da72f5421eae9cecf9bf6ff72f03a0299bfdd78357135f4432dfdf48b66f5a34"} Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.133900 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get \"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\": context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current" 
Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.134858 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get \"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\": context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current" Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.135226 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h567hb7h8ch5d9h8dh58bhd5h54dh7bhf6h599h5f4h65bhb8h657h674h58ch5dchf5hffh67h86h668hc9h5c4hd7h69h5fbh697h56fhfcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cvwvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,S
ubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-l8vtq_openstack(061fa283-77d3-42e2-b267-2c01852d4123): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get \"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\": context canceled" logger="UnhandledError" Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.136542 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get \\\"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\\\": context canceled\"" pod="openstack/ovn-controller-ovs-l8vtq" podUID="061fa283-77d3-42e2-b267-2c01852d4123" Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.651747 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get 
\"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\": context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current" Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.651882 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get \"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\": context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current" Dec 10 14:55:07 crc kubenswrapper[4718]: E1210 14:55:07.652176 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h57bh5fdh67bh555h67ch54ch5d7h5f6h564h8chcdh576hb9h658h584h65h67h687h9hc8h647hb6hbch78h55h68fh89h59bhf5h674h8bq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-c
a-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nstcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(50fbf3ca-d871-4ccd-a412-636fa783e3d4): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get \"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\": context canceled" logger="UnhandledError" Dec 10 14:55:08 crc kubenswrapper[4718]: E1210 14:55:08.144763 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current\\\"\"" pod="openstack/ovn-controller-ovs-l8vtq" 
podUID="061fa283-77d3-42e2-b267-2c01852d4123" Dec 10 14:55:08 crc kubenswrapper[4718]: I1210 14:55:08.332936 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-vs8bb"] Dec 10 14:55:09 crc kubenswrapper[4718]: E1210 14:55:09.502898 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62" Dec 10 14:55:09 crc kubenswrapper[4718]: E1210 14:55:09.503856 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xf2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(7ea78b30-1cce-42f6-abae-e7e66ee3daae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:55:09 crc kubenswrapper[4718]: E1210 14:55:09.505139 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" 
podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" Dec 10 14:55:10 crc kubenswrapper[4718]: E1210 14:55:10.171113 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" Dec 10 14:55:16 crc kubenswrapper[4718]: E1210 14:55:16.088164 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 10 14:55:16 crc kubenswrapper[4718]: E1210 14:55:16.088805 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 10 14:55:16 crc kubenswrapper[4718]: E1210 14:55:16.089175 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tzvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(f81943cf-47c7-424d-9473-2df3195bc9a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:16 crc kubenswrapper[4718]: E1210 14:55:16.090579 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="f81943cf-47c7-424d-9473-2df3195bc9a6" Dec 10 14:55:16 crc kubenswrapper[4718]: E1210 14:55:16.238036 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="f81943cf-47c7-424d-9473-2df3195bc9a6" Dec 10 14:55:17 crc kubenswrapper[4718]: E1210 14:55:17.714010 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Dec 10 14:55:17 crc kubenswrapper[4718]: E1210 14:55:17.714081 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Dec 10 14:55:17 crc kubenswrapper[4718]: E1210 14:55:17.714279 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 
's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55kgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(282d32e9-d539-4bac-9fd1-a8735e8d92e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:17 crc kubenswrapper[4718]: E1210 14:55:17.715595 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" Dec 10 14:55:18 crc kubenswrapper[4718]: E1210 14:55:18.265995 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.014865 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.016970 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.017336 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zcp5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(0708d5de-311d-46e3-981e-7bd7a2fc495c): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.019903 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="0708d5de-311d-46e3-981e-7bd7a2fc495c" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.106205 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.106309 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.106534 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79ddq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0fcf07d8-859b-4547-8a32-824f40da6a93): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:19 crc 
kubenswrapper[4718]: E1210 14:55:19.107926 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.220012 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.220100 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.220334 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85fbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(5611ee41-14a4-45d3-88b1-e6e6c9bc4d13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:19 crc 
kubenswrapper[4718]: E1210 14:55:19.221607 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="5611ee41-14a4-45d3-88b1-e6e6c9bc4d13" Dec 10 14:55:19 crc kubenswrapper[4718]: I1210 14:55:19.277623 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" event={"ID":"14db89d0-8a08-4320-b700-ec422442571c","Type":"ContainerStarted","Data":"64f2319e93acbd089f03e21bd6f54bcc11d127a019b47a282e6dfa74d40cee88"} Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.279889 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="5611ee41-14a4-45d3-88b1-e6e6c9bc4d13" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.280829 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" Dec 10 14:55:19 crc kubenswrapper[4718]: E1210 14:55:19.281003 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current\\\"\"" pod="openstack/openstack-galera-0" podUID="0708d5de-311d-46e3-981e-7bd7a2fc495c" Dec 10 14:55:20 crc kubenswrapper[4718]: E1210 14:55:20.050022 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-memcached:current" Dec 10 14:55:20 crc kubenswrapper[4718]: E1210 14:55:20.050107 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-memcached:current" Dec 10 14:55:20 crc kubenswrapper[4718]: E1210 14:55:20.050491 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.rdoproject.org/podified-master-centos10/openstack-memcached:current,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n56fh9bh657hc8h68ch5d8h85h4h558h5dbh5f7h58dhcbhc5h58dh8ch57fh57ch4h5b9hb6hbdh5c6h598h69hbch654hf8h75h565h5b4h677q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-
data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cbzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(571880ea-f2a9-4e9e-99a5-c8bcaffb8675): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:20 crc kubenswrapper[4718]: E1210 14:55:20.052129 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = 
copying config: context canceled\"" pod="openstack/memcached-0" podUID="571880ea-f2a9-4e9e-99a5-c8bcaffb8675" Dec 10 14:55:20 crc kubenswrapper[4718]: E1210 14:55:20.316134 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-memcached:current\\\"\"" pod="openstack/memcached-0" podUID="571880ea-f2a9-4e9e-99a5-c8bcaffb8675" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.081840 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.082481 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.082746 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9md9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8468885bfc-pmrts_openstack(5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.083913 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-8468885bfc-pmrts" podUID="5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.346973 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.347071 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.347343 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h567hb7h8ch5d9h8dh58bhd5h54dh7bhf6h599h5f4h65bhb8h657h674h58ch5dchf5hffh67h86h668hc9h5c4hd7h69h5fbh697h56fhfcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-npj4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil
,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ftrjh_openstack(3e4f376b-2175-46d8-8b88-0560a3fcf231): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.349665 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ftrjh" podUID="3e4f376b-2175-46d8-8b88-0560a3fcf231" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.370537 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current\\\"\"" pod="openstack/ovn-controller-ftrjh" podUID="3e4f376b-2175-46d8-8b88-0560a3fcf231" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.380375 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.380469 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.380639 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6vl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86b8f4ff9-5scsd_openstack(d2148743-814f-409c-bbf9-65b8430f767c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.382080 4718 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" podUID="d2148743-814f-409c-bbf9-65b8430f767c" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.600734 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.600838 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.601230 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n666h66h67h549h556h589h589h5dch575h9fh5c5h674h55h5fh9fh668h66fh78h685h694h55ch574h64dh77h574hf9h67bh5dfh59bh665h68ch5f6q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMoun
t{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4n5x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(1ccf190d-cc0e-471c-b506-9784b1e8b038): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.657951 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.658040 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 
14:55:26.658205 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnngk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:f
alse,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-b9b4959cc-p58sh_openstack(bf0bd86a-1722-479c-8dcd-25080bc05c11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.659581 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" podUID="bf0bd86a-1722-479c-8dcd-25080bc05c11" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.895914 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.896547 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.896846 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbfdx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5449989c59-s2cg4_openstack(28bd664c-dd6f-4b1d-a9c3-239b94717974): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.898743 4718 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" podUID="28bd664c-dd6f-4b1d-a9c3-239b94717974" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.925780 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.925870 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.926119 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj9qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-545d49fd5c-wvw6h_openstack(d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:55:26 crc kubenswrapper[4718]: E1210 14:55:26.927300 4718 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" podUID="d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3" Dec 10 14:55:27 crc kubenswrapper[4718]: I1210 14:55:27.323075 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-9888d"] Dec 10 14:55:27 crc kubenswrapper[4718]: I1210 14:55:27.392378 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nf6b6"] Dec 10 14:55:28 crc kubenswrapper[4718]: I1210 14:55:28.991130 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.152483 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-config\") pod \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.152752 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9md9c\" (UniqueName: \"kubernetes.io/projected/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-kube-api-access-9md9c\") pod \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\" (UID: \"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.153730 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-config" (OuterVolumeSpecName: "config") pod "5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919" (UID: "5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.162253 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-kube-api-access-9md9c" (OuterVolumeSpecName: "kube-api-access-9md9c") pod "5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919" (UID: "5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919"). InnerVolumeSpecName "kube-api-access-9md9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: W1210 14:55:29.174932 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7326e5bc_27b1_4b9a_b0ea_979589622ea3.slice/crio-c7f433150ce27a0752a04ac407afbd416b9ab61dececfda7a63db4bd3247a7f8 WatchSource:0}: Error finding container c7f433150ce27a0752a04ac407afbd416b9ab61dececfda7a63db4bd3247a7f8: Status 404 returned error can't find the container with id c7f433150ce27a0752a04ac407afbd416b9ab61dececfda7a63db4bd3247a7f8 Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.256600 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9md9c\" (UniqueName: \"kubernetes.io/projected/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-kube-api-access-9md9c\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.256665 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.268381 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.284115 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.305179 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.315583 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357342 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-config\") pod \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357411 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6vl8\" (UniqueName: \"kubernetes.io/projected/d2148743-814f-409c-bbf9-65b8430f767c-kube-api-access-n6vl8\") pod \"d2148743-814f-409c-bbf9-65b8430f767c\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357450 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-dns-svc\") pod \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357487 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-dns-svc\") pod \"28bd664c-dd6f-4b1d-a9c3-239b94717974\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357513 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-nj9qn\" (UniqueName: \"kubernetes.io/projected/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-kube-api-access-nj9qn\") pod \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\" (UID: \"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357548 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-config\") pod \"d2148743-814f-409c-bbf9-65b8430f767c\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.357621 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-dns-svc\") pod \"bf0bd86a-1722-479c-8dcd-25080bc05c11\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.358605 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf0bd86a-1722-479c-8dcd-25080bc05c11" (UID: "bf0bd86a-1722-479c-8dcd-25080bc05c11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.358980 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28bd664c-dd6f-4b1d-a9c3-239b94717974" (UID: "28bd664c-dd6f-4b1d-a9c3-239b94717974"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.359158 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3" (UID: "d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.359303 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-config" (OuterVolumeSpecName: "config") pod "d2148743-814f-409c-bbf9-65b8430f767c" (UID: "d2148743-814f-409c-bbf9-65b8430f767c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.364523 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-config" (OuterVolumeSpecName: "config") pod "d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3" (UID: "d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.375860 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-kube-api-access-nj9qn" (OuterVolumeSpecName: "kube-api-access-nj9qn") pod "d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3" (UID: "d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3"). InnerVolumeSpecName "kube-api-access-nj9qn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.379698 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2148743-814f-409c-bbf9-65b8430f767c-kube-api-access-n6vl8" (OuterVolumeSpecName: "kube-api-access-n6vl8") pod "d2148743-814f-409c-bbf9-65b8430f767c" (UID: "d2148743-814f-409c-bbf9-65b8430f767c"). InnerVolumeSpecName "kube-api-access-n6vl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.406188 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nf6b6" event={"ID":"7326e5bc-27b1-4b9a-b0ea-979589622ea3","Type":"ContainerStarted","Data":"c7f433150ce27a0752a04ac407afbd416b9ab61dececfda7a63db4bd3247a7f8"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.408411 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-pmrts" event={"ID":"5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919","Type":"ContainerDied","Data":"e7e854b27fe4c2083fc67987539206de13def1157c1d20405fc2c9fa3d057799"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.408433 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-pmrts" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.413053 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" event={"ID":"d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3","Type":"ContainerDied","Data":"87fa213d2cfcd20df65fcf2cb55aee09e060a173a2c7be18387f767077cf8e32"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.413150 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-wvw6h" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.414469 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" event={"ID":"70c358d6-fb60-4ab9-a1d2-90a25b22fc41","Type":"ContainerStarted","Data":"52a87a9a3d6bcf2eab8b92f0c90a008ce355e13306642b5564f066d2af1f3b64"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.419672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" event={"ID":"d2148743-814f-409c-bbf9-65b8430f767c","Type":"ContainerDied","Data":"f7e5681debd50b6e2c0b90341f7455a01662f5e8b0e3bc62939e5dec75701f21"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.419799 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86b8f4ff9-5scsd" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.422246 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.422255 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b9b4959cc-p58sh" event={"ID":"bf0bd86a-1722-479c-8dcd-25080bc05c11","Type":"ContainerDied","Data":"173b426c7ab1ee7b23cf2e07c0d00c3a281c2b77eedf6efae3529bd44cd02324"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.423271 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" event={"ID":"28bd664c-dd6f-4b1d-a9c3-239b94717974","Type":"ContainerDied","Data":"00f5aa4d56f522440443732213f24df960c770f9c14bca76a450b55894d92916"} Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.423364 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-s2cg4" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.461983 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-config\") pod \"bf0bd86a-1722-479c-8dcd-25080bc05c11\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.462053 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbfdx\" (UniqueName: \"kubernetes.io/projected/28bd664c-dd6f-4b1d-a9c3-239b94717974-kube-api-access-pbfdx\") pod \"28bd664c-dd6f-4b1d-a9c3-239b94717974\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.462141 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-config\") pod \"28bd664c-dd6f-4b1d-a9c3-239b94717974\" (UID: \"28bd664c-dd6f-4b1d-a9c3-239b94717974\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.462184 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnngk\" (UniqueName: \"kubernetes.io/projected/bf0bd86a-1722-479c-8dcd-25080bc05c11-kube-api-access-wnngk\") pod \"bf0bd86a-1722-479c-8dcd-25080bc05c11\" (UID: \"bf0bd86a-1722-479c-8dcd-25080bc05c11\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.462222 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-dns-svc\") pod \"d2148743-814f-409c-bbf9-65b8430f767c\" (UID: \"d2148743-814f-409c-bbf9-65b8430f767c\") " Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.463062 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.466852 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6vl8\" (UniqueName: \"kubernetes.io/projected/d2148743-814f-409c-bbf9-65b8430f767c-kube-api-access-n6vl8\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.466876 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.466892 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.466906 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj9qn\" (UniqueName: \"kubernetes.io/projected/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3-kube-api-access-nj9qn\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.466924 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.466938 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.463136 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-config" (OuterVolumeSpecName: "config") pod "28bd664c-dd6f-4b1d-a9c3-239b94717974" (UID: 
"28bd664c-dd6f-4b1d-a9c3-239b94717974"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.463890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2148743-814f-409c-bbf9-65b8430f767c" (UID: "d2148743-814f-409c-bbf9-65b8430f767c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.465840 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-config" (OuterVolumeSpecName: "config") pod "bf0bd86a-1722-479c-8dcd-25080bc05c11" (UID: "bf0bd86a-1722-479c-8dcd-25080bc05c11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.467822 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bd664c-dd6f-4b1d-a9c3-239b94717974-kube-api-access-pbfdx" (OuterVolumeSpecName: "kube-api-access-pbfdx") pod "28bd664c-dd6f-4b1d-a9c3-239b94717974" (UID: "28bd664c-dd6f-4b1d-a9c3-239b94717974"). InnerVolumeSpecName "kube-api-access-pbfdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.469615 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0bd86a-1722-479c-8dcd-25080bc05c11-kube-api-access-wnngk" (OuterVolumeSpecName: "kube-api-access-wnngk") pod "bf0bd86a-1722-479c-8dcd-25080bc05c11" (UID: "bf0bd86a-1722-479c-8dcd-25080bc05c11"). InnerVolumeSpecName "kube-api-access-wnngk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.504478 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-pmrts"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.532244 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-pmrts"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.551255 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-wvw6h"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.565362 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-wvw6h"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.567961 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0bd86a-1722-479c-8dcd-25080bc05c11-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.568013 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbfdx\" (UniqueName: \"kubernetes.io/projected/28bd664c-dd6f-4b1d-a9c3-239b94717974-kube-api-access-pbfdx\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.568029 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28bd664c-dd6f-4b1d-a9c3-239b94717974-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.568041 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnngk\" (UniqueName: \"kubernetes.io/projected/bf0bd86a-1722-479c-8dcd-25080bc05c11-kube-api-access-wnngk\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.568053 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d2148743-814f-409c-bbf9-65b8430f767c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.806094 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86b8f4ff9-5scsd"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.813425 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86b8f4ff9-5scsd"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.847325 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b9b4959cc-p58sh"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.863874 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b9b4959cc-p58sh"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.883353 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-s2cg4"] Dec 10 14:55:29 crc kubenswrapper[4718]: I1210 14:55:29.890119 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-s2cg4"] Dec 10 14:55:29 crc kubenswrapper[4718]: E1210 14:55:29.926148 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 10 14:55:29 crc kubenswrapper[4718]: E1210 14:55:29.926486 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 10 14:55:29 crc kubenswrapper[4718]: E1210 14:55:29.926848 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8rnc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(df93c205-2f35-4c9d-b3ce-45174d5bfc2d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:55:29 crc kubenswrapper[4718]: E1210 14:55:29.928439 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" Dec 10 14:55:30 crc kubenswrapper[4718]: I1210 14:55:30.035044 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bd664c-dd6f-4b1d-a9c3-239b94717974" path="/var/lib/kubelet/pods/28bd664c-dd6f-4b1d-a9c3-239b94717974/volumes" Dec 10 14:55:30 crc kubenswrapper[4718]: I1210 14:55:30.037264 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919" path="/var/lib/kubelet/pods/5fe12fc7-cc3d-47cb-8c4c-c38cdae3d919/volumes" Dec 10 14:55:30 crc 
kubenswrapper[4718]: I1210 14:55:30.037970 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0bd86a-1722-479c-8dcd-25080bc05c11" path="/var/lib/kubelet/pods/bf0bd86a-1722-479c-8dcd-25080bc05c11/volumes" Dec 10 14:55:30 crc kubenswrapper[4718]: I1210 14:55:30.038574 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2148743-814f-409c-bbf9-65b8430f767c" path="/var/lib/kubelet/pods/d2148743-814f-409c-bbf9-65b8430f767c/volumes" Dec 10 14:55:30 crc kubenswrapper[4718]: I1210 14:55:30.039150 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3" path="/var/lib/kubelet/pods/d49150b3-2c3e-4214-9c50-cbd6a1aaa0d3/volumes" Dec 10 14:55:30 crc kubenswrapper[4718]: E1210 14:55:30.462336 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" Dec 10 14:55:30 crc kubenswrapper[4718]: E1210 14:55:30.882871 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="1ccf190d-cc0e-471c-b506-9784b1e8b038" Dec 10 14:55:30 crc kubenswrapper[4718]: E1210 14:55:30.883210 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7: Get 
\\\"https://quay.rdoproject.org/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:9edf1fd595077120223c7592832408eebed05c6c0300d97012a0e98d8ecca5b7\\\": context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="50fbf3ca-d871-4ccd-a412-636fa783e3d4" Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.446435 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"50fbf3ca-d871-4ccd-a412-636fa783e3d4","Type":"ContainerStarted","Data":"04542a9e11f38b025f83a66c58683d8b097c6aa2966a62f5c58c982fdf67b766"} Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.449186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ccf190d-cc0e-471c-b506-9784b1e8b038","Type":"ContainerStarted","Data":"9d245a40da159714e08b1a349469485c9f7321d054abeccea406cd3692665edc"} Dec 10 14:55:31 crc kubenswrapper[4718]: E1210 14:55:31.451927 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="1ccf190d-cc0e-471c-b506-9784b1e8b038" Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.454008 4718 generic.go:334] "Generic (PLEG): container finished" podID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerID="d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad" exitCode=0 Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.455407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" event={"ID":"70c358d6-fb60-4ab9-a1d2-90a25b22fc41","Type":"ContainerDied","Data":"d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad"} Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.458814 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="061fa283-77d3-42e2-b267-2c01852d4123" containerID="a9207738dd6c76815f5307d9d81a664230fce4498de55233f288dc8d1f9eb6d3" exitCode=0 Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.458899 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8vtq" event={"ID":"061fa283-77d3-42e2-b267-2c01852d4123","Type":"ContainerDied","Data":"a9207738dd6c76815f5307d9d81a664230fce4498de55233f288dc8d1f9eb6d3"} Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.462607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f81943cf-47c7-424d-9473-2df3195bc9a6","Type":"ContainerStarted","Data":"6afd55fc661e4f95dd82b5831dae5657b43daf8fd4e4b7c7afbe4d6c73fe1a60"} Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.470720 4718 generic.go:334] "Generic (PLEG): container finished" podID="14db89d0-8a08-4320-b700-ec422442571c" containerID="7598dd5d228dccafb8a7cb1418972eb5584fd4fd30b307cbad88707228171b32" exitCode=0 Dec 10 14:55:31 crc kubenswrapper[4718]: I1210 14:55:31.470806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" event={"ID":"14db89d0-8a08-4320-b700-ec422442571c","Type":"ContainerDied","Data":"7598dd5d228dccafb8a7cb1418972eb5584fd4fd30b307cbad88707228171b32"} Dec 10 14:55:32 crc kubenswrapper[4718]: I1210 14:55:32.483291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" event={"ID":"14db89d0-8a08-4320-b700-ec422442571c","Type":"ContainerStarted","Data":"c19add53ec11a0ab0ace153f0b4bb980702e61b3c6e39b1692826de3be16d214"} Dec 10 14:55:32 crc kubenswrapper[4718]: E1210 14:55:32.485285 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current\\\"\"" pod="openstack/ovsdbserver-sb-0" 
podUID="1ccf190d-cc0e-471c-b506-9784b1e8b038" Dec 10 14:55:32 crc kubenswrapper[4718]: I1210 14:55:32.516687 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" podStartSLOduration=42.185850268 podStartE2EDuration="53.51659719s" podCreationTimestamp="2025-12-10 14:54:39 +0000 UTC" firstStartedPulling="2025-12-10 14:55:18.991377706 +0000 UTC m=+1423.940601123" lastFinishedPulling="2025-12-10 14:55:30.322124628 +0000 UTC m=+1435.271348045" observedRunningTime="2025-12-10 14:55:32.514138797 +0000 UTC m=+1437.463362214" watchObservedRunningTime="2025-12-10 14:55:32.51659719 +0000 UTC m=+1437.465820617" Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.511108 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"282d32e9-d539-4bac-9fd1-a8735e8d92e1","Type":"ContainerStarted","Data":"49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d"} Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.519825 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" event={"ID":"70c358d6-fb60-4ab9-a1d2-90a25b22fc41","Type":"ContainerStarted","Data":"ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8"} Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.520199 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.527526 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8vtq" event={"ID":"061fa283-77d3-42e2-b267-2c01852d4123","Type":"ContainerStarted","Data":"59a9dc61d28585d75811795a015d5bab0f397d66aa42414d6598f1bf73767e42"} Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.527617 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-l8vtq" 
event={"ID":"061fa283-77d3-42e2-b267-2c01852d4123","Type":"ContainerStarted","Data":"8e5c59bc279474cb41825b268ec72c33e8e298a19c3f405056a7d4d2146b3e8b"} Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.528802 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.528866 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.528923 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.590743 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-l8vtq" podStartSLOduration=13.555951406 podStartE2EDuration="1m8.590705381s" podCreationTimestamp="2025-12-10 14:54:25 +0000 UTC" firstStartedPulling="2025-12-10 14:54:34.877783687 +0000 UTC m=+1379.827007104" lastFinishedPulling="2025-12-10 14:55:29.912537662 +0000 UTC m=+1434.861761079" observedRunningTime="2025-12-10 14:55:33.566641387 +0000 UTC m=+1438.515864804" watchObservedRunningTime="2025-12-10 14:55:33.590705381 +0000 UTC m=+1438.539928808" Dec 10 14:55:33 crc kubenswrapper[4718]: I1210 14:55:33.607596 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" podStartSLOduration=52.925780527 podStartE2EDuration="54.607563081s" podCreationTimestamp="2025-12-10 14:54:39 +0000 UTC" firstStartedPulling="2025-12-10 14:55:28.855187188 +0000 UTC m=+1433.804410605" lastFinishedPulling="2025-12-10 14:55:30.536969732 +0000 UTC m=+1435.486193159" observedRunningTime="2025-12-10 14:55:33.598324566 +0000 UTC m=+1438.547548003" watchObservedRunningTime="2025-12-10 14:55:33.607563081 +0000 UTC m=+1438.556786498" Dec 10 14:55:34 crc kubenswrapper[4718]: I1210 14:55:34.635448 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerStarted","Data":"cb941a979e3d51662c1c888d4e80eee2c6c2c4f0cc496d6051c6c77d152f3af2"} Dec 10 14:55:34 crc kubenswrapper[4718]: I1210 14:55:34.639288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"50fbf3ca-d871-4ccd-a412-636fa783e3d4","Type":"ContainerStarted","Data":"439ff1c7c5770fe2b302a2b370d1902aac5c23c27bf0f23f99c9436f39af0700"} Dec 10 14:55:34 crc kubenswrapper[4718]: I1210 14:55:34.642643 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nf6b6" event={"ID":"7326e5bc-27b1-4b9a-b0ea-979589622ea3","Type":"ContainerStarted","Data":"62fa43a6e449a5d73f42e4a71a35b1ea30af06b3cb0139dfbeabe96ae3468881"} Dec 10 14:55:34 crc kubenswrapper[4718]: I1210 14:55:34.700224 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.493922974 podStartE2EDuration="1m9.700193594s" podCreationTimestamp="2025-12-10 14:54:25 +0000 UTC" firstStartedPulling="2025-12-10 14:54:34.878370592 +0000 UTC m=+1379.827594009" lastFinishedPulling="2025-12-10 14:55:33.084641202 +0000 UTC m=+1438.033864629" observedRunningTime="2025-12-10 14:55:34.69336181 +0000 UTC m=+1439.642585227" watchObservedRunningTime="2025-12-10 14:55:34.700193594 +0000 UTC m=+1439.649417011" Dec 10 14:55:34 crc kubenswrapper[4718]: I1210 14:55:34.744242 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nf6b6" podStartSLOduration=52.639334675 podStartE2EDuration="56.744212688s" podCreationTimestamp="2025-12-10 14:54:38 +0000 UTC" firstStartedPulling="2025-12-10 14:55:29.177687371 +0000 UTC m=+1434.126910798" lastFinishedPulling="2025-12-10 14:55:33.282565394 +0000 UTC m=+1438.231788811" observedRunningTime="2025-12-10 14:55:34.734225413 +0000 
UTC m=+1439.683448830" watchObservedRunningTime="2025-12-10 14:55:34.744212688 +0000 UTC m=+1439.693436105" Dec 10 14:55:35 crc kubenswrapper[4718]: I1210 14:55:35.887288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13","Type":"ContainerStarted","Data":"c7bce5dfb390fbf9cd361478a7cdb8ba64af763cff7949a115daaf88f8422b66"} Dec 10 14:55:35 crc kubenswrapper[4718]: I1210 14:55:35.897559 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0708d5de-311d-46e3-981e-7bd7a2fc495c","Type":"ContainerStarted","Data":"9573232c3d4b99f6fc5ff9580fc0a2c41563d6454c0fa5e757a74796a6103eeb"} Dec 10 14:55:35 crc kubenswrapper[4718]: I1210 14:55:35.900751 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"571880ea-f2a9-4e9e-99a5-c8bcaffb8675","Type":"ContainerStarted","Data":"0a260683094fd5de76fed1ebce033fc4e3c0c53dee597906f42bb8d138e03147"} Dec 10 14:55:35 crc kubenswrapper[4718]: I1210 14:55:35.901312 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 10 14:55:35 crc kubenswrapper[4718]: I1210 14:55:35.975783 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.22332952 podStartE2EDuration="1m19.975753108s" podCreationTimestamp="2025-12-10 14:54:16 +0000 UTC" firstStartedPulling="2025-12-10 14:54:21.553727807 +0000 UTC m=+1366.502951234" lastFinishedPulling="2025-12-10 14:55:34.306151395 +0000 UTC m=+1439.255374822" observedRunningTime="2025-12-10 14:55:35.97151929 +0000 UTC m=+1440.920742707" watchObservedRunningTime="2025-12-10 14:55:35.975753108 +0000 UTC m=+1440.924976525" Dec 10 14:55:36 crc kubenswrapper[4718]: I1210 14:55:36.278996 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:36 crc 
kubenswrapper[4718]: I1210 14:55:36.329221 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:36 crc kubenswrapper[4718]: I1210 14:55:36.913243 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0fcf07d8-859b-4547-8a32-824f40da6a93","Type":"ContainerStarted","Data":"c8b2fcef7b1e85bda85dbf340e17ddc30ec7066a78b9e32b881e0b894f46560d"} Dec 10 14:55:36 crc kubenswrapper[4718]: I1210 14:55:36.913861 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:38 crc kubenswrapper[4718]: I1210 14:55:38.953755 4718 generic.go:334] "Generic (PLEG): container finished" podID="f81943cf-47c7-424d-9473-2df3195bc9a6" containerID="6afd55fc661e4f95dd82b5831dae5657b43daf8fd4e4b7c7afbe4d6c73fe1a60" exitCode=0 Dec 10 14:55:38 crc kubenswrapper[4718]: I1210 14:55:38.953836 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f81943cf-47c7-424d-9473-2df3195bc9a6","Type":"ContainerDied","Data":"6afd55fc661e4f95dd82b5831dae5657b43daf8fd4e4b7c7afbe4d6c73fe1a60"} Dec 10 14:55:39 crc kubenswrapper[4718]: I1210 14:55:39.028640 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 14:55:39 crc kubenswrapper[4718]: I1210 14:55:39.714853 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:55:39 crc kubenswrapper[4718]: I1210 14:55:39.980309 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ftrjh" event={"ID":"3e4f376b-2175-46d8-8b88-0560a3fcf231","Type":"ContainerStarted","Data":"278b08e25f0c8a20450039757c8ce04008da17c83f079bb22f9ddf35e065c1b3"} Dec 10 14:55:39 crc kubenswrapper[4718]: I1210 14:55:39.981291 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ftrjh" Dec 10 14:55:39 crc kubenswrapper[4718]: I1210 14:55:39.987975 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f81943cf-47c7-424d-9473-2df3195bc9a6","Type":"ContainerStarted","Data":"6c848fd92ddb856f17c0db73ceb5def858c035b54a8f1126622b42e5dab1f703"} Dec 10 14:55:40 crc kubenswrapper[4718]: I1210 14:55:40.005966 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ftrjh" podStartSLOduration=5.362512662 podStartE2EDuration="1m15.005931864s" podCreationTimestamp="2025-12-10 14:54:25 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.578884487 +0000 UTC m=+1374.528107904" lastFinishedPulling="2025-12-10 14:55:39.222303689 +0000 UTC m=+1444.171527106" observedRunningTime="2025-12-10 14:55:40.000801123 +0000 UTC m=+1444.950024550" watchObservedRunningTime="2025-12-10 14:55:40.005931864 +0000 UTC m=+1444.955155281" Dec 10 14:55:40 crc kubenswrapper[4718]: I1210 14:55:40.032152 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.875669314 podStartE2EDuration="1m25.032111803s" podCreationTimestamp="2025-12-10 14:54:15 +0000 UTC" firstStartedPulling="2025-12-10 14:54:19.335823081 +0000 UTC m=+1364.285046498" lastFinishedPulling="2025-12-10 14:55:30.49226557 +0000 UTC m=+1435.441488987" observedRunningTime="2025-12-10 14:55:40.025967386 +0000 UTC m=+1444.975190803" watchObservedRunningTime="2025-12-10 14:55:40.032111803 +0000 UTC m=+1444.981335220" Dec 10 14:55:40 crc kubenswrapper[4718]: I1210 14:55:40.582611 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:55:40 crc kubenswrapper[4718]: I1210 14:55:40.649664 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-vs8bb"] Dec 10 14:55:40 crc kubenswrapper[4718]: I1210 
14:55:40.650525 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="dnsmasq-dns" containerID="cri-o://c19add53ec11a0ab0ace153f0b4bb980702e61b3c6e39b1692826de3be16d214" gracePeriod=10 Dec 10 14:55:42 crc kubenswrapper[4718]: I1210 14:55:42.389354 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 10 14:55:42 crc kubenswrapper[4718]: I1210 14:55:42.457129 4718 generic.go:334] "Generic (PLEG): container finished" podID="14db89d0-8a08-4320-b700-ec422442571c" containerID="c19add53ec11a0ab0ace153f0b4bb980702e61b3c6e39b1692826de3be16d214" exitCode=0 Dec 10 14:55:42 crc kubenswrapper[4718]: I1210 14:55:42.457619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" event={"ID":"14db89d0-8a08-4320-b700-ec422442571c","Type":"ContainerDied","Data":"c19add53ec11a0ab0ace153f0b4bb980702e61b3c6e39b1692826de3be16d214"} Dec 10 14:55:42 crc kubenswrapper[4718]: I1210 14:55:42.465156 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 10 14:55:44 crc kubenswrapper[4718]: I1210 14:55:44.735836 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.117:5353: connect: connection refused" Dec 10 14:55:47 crc kubenswrapper[4718]: I1210 14:55:47.388063 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:47 crc kubenswrapper[4718]: I1210 14:55:47.388474 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.471353 4718 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.622069 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-dns-svc\") pod \"14db89d0-8a08-4320-b700-ec422442571c\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.622206 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltw2c\" (UniqueName: \"kubernetes.io/projected/14db89d0-8a08-4320-b700-ec422442571c-kube-api-access-ltw2c\") pod \"14db89d0-8a08-4320-b700-ec422442571c\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.622330 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-ovsdbserver-nb\") pod \"14db89d0-8a08-4320-b700-ec422442571c\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.622459 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-config\") pod \"14db89d0-8a08-4320-b700-ec422442571c\" (UID: \"14db89d0-8a08-4320-b700-ec422442571c\") " Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.633928 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14db89d0-8a08-4320-b700-ec422442571c-kube-api-access-ltw2c" (OuterVolumeSpecName: "kube-api-access-ltw2c") pod "14db89d0-8a08-4320-b700-ec422442571c" (UID: "14db89d0-8a08-4320-b700-ec422442571c"). InnerVolumeSpecName "kube-api-access-ltw2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.667379 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "14db89d0-8a08-4320-b700-ec422442571c" (UID: "14db89d0-8a08-4320-b700-ec422442571c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.667438 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-config" (OuterVolumeSpecName: "config") pod "14db89d0-8a08-4320-b700-ec422442571c" (UID: "14db89d0-8a08-4320-b700-ec422442571c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.684966 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "14db89d0-8a08-4320-b700-ec422442571c" (UID: "14db89d0-8a08-4320-b700-ec422442571c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.724978 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.725030 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.725042 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14db89d0-8a08-4320-b700-ec422442571c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.725054 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltw2c\" (UniqueName: \"kubernetes.io/projected/14db89d0-8a08-4320-b700-ec422442571c-kube-api-access-ltw2c\") on node \"crc\" DevicePath \"\"" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.810734 4718 generic.go:334] "Generic (PLEG): container finished" podID="0708d5de-311d-46e3-981e-7bd7a2fc495c" containerID="9573232c3d4b99f6fc5ff9580fc0a2c41563d6454c0fa5e757a74796a6103eeb" exitCode=0 Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.810845 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0708d5de-311d-46e3-981e-7bd7a2fc495c","Type":"ContainerDied","Data":"9573232c3d4b99f6fc5ff9580fc0a2c41563d6454c0fa5e757a74796a6103eeb"} Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.815883 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerID="cb941a979e3d51662c1c888d4e80eee2c6c2c4f0cc496d6051c6c77d152f3af2" exitCode=0 Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.815950 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerDied","Data":"cb941a979e3d51662c1c888d4e80eee2c6c2c4f0cc496d6051c6c77d152f3af2"} Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.820124 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" event={"ID":"14db89d0-8a08-4320-b700-ec422442571c","Type":"ContainerDied","Data":"64f2319e93acbd089f03e21bd6f54bcc11d127a019b47a282e6dfa74d40cee88"} Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.820250 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-vs8bb" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.820354 4718 scope.go:117] "RemoveContainer" containerID="c19add53ec11a0ab0ace153f0b4bb980702e61b3c6e39b1692826de3be16d214" Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.898669 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-vs8bb"] Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.905462 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-vs8bb"] Dec 10 14:55:48 crc kubenswrapper[4718]: I1210 14:55:48.970485 4718 scope.go:117] "RemoveContainer" containerID="7598dd5d228dccafb8a7cb1418972eb5584fd4fd30b307cbad88707228171b32" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.855369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df93c205-2f35-4c9d-b3ce-45174d5bfc2d","Type":"ContainerStarted","Data":"dd5efa31a3cc66a353bf75473378d6508fad8bf88fa1da9766951ed362f1ef91"} Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.858643 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.863905 4718 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-nqtx8"] Dec 10 14:55:49 crc kubenswrapper[4718]: E1210 14:55:49.873914 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="dnsmasq-dns" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.874000 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="dnsmasq-dns" Dec 10 14:55:49 crc kubenswrapper[4718]: E1210 14:55:49.874022 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="init" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.874030 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="init" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.874265 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="14db89d0-8a08-4320-b700-ec422442571c" containerName="dnsmasq-dns" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.875560 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0708d5de-311d-46e3-981e-7bd7a2fc495c","Type":"ContainerStarted","Data":"1af9f0ca3c5b0ed5fd60c686823c9d1782488bc2bc88898d708c09b3bf0353ec"} Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.875719 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.893701 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1ccf190d-cc0e-471c-b506-9784b1e8b038","Type":"ContainerStarted","Data":"82a994b84ed5503ab6a7bf6a0833687b4ec30932f9dc69bd47b543de0c371310"} Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.894993 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-nqtx8"] Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.903333 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.740370749 podStartE2EDuration="1m30.903297763s" podCreationTimestamp="2025-12-10 14:54:19 +0000 UTC" firstStartedPulling="2025-12-10 14:54:25.808011776 +0000 UTC m=+1370.757235193" lastFinishedPulling="2025-12-10 14:55:48.97093879 +0000 UTC m=+1453.920162207" observedRunningTime="2025-12-10 14:55:49.899847405 +0000 UTC m=+1454.849070822" watchObservedRunningTime="2025-12-10 14:55:49.903297763 +0000 UTC m=+1454.852521170" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.956048 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbzpq\" (UniqueName: \"kubernetes.io/projected/cf76b176-14cf-4972-9384-7a0c69151f84-kube-api-access-fbzpq\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.956104 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 
14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.956131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-config\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.956298 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-dns-svc\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.956336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:49 crc kubenswrapper[4718]: I1210 14:55:49.991913 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.440324111 podStartE2EDuration="1m21.991876544s" podCreationTimestamp="2025-12-10 14:54:28 +0000 UTC" firstStartedPulling="2025-12-10 14:54:51.804796088 +0000 UTC m=+1396.754019505" lastFinishedPulling="2025-12-10 14:55:48.356348521 +0000 UTC m=+1453.305571938" observedRunningTime="2025-12-10 14:55:49.968503837 +0000 UTC m=+1454.917727264" watchObservedRunningTime="2025-12-10 14:55:49.991876544 +0000 UTC m=+1454.941099961" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.001058 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" 
podStartSLOduration=-9223371940.853748 podStartE2EDuration="1m36.001027358s" podCreationTimestamp="2025-12-10 14:54:14 +0000 UTC" firstStartedPulling="2025-12-10 14:54:18.36182366 +0000 UTC m=+1363.311047077" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:49.999338135 +0000 UTC m=+1454.948561552" watchObservedRunningTime="2025-12-10 14:55:50.001027358 +0000 UTC m=+1454.950250775" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.034083 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14db89d0-8a08-4320-b700-ec422442571c" path="/var/lib/kubelet/pods/14db89d0-8a08-4320-b700-ec422442571c/volumes" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.062273 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-dns-svc\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.062347 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.062481 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbzpq\" (UniqueName: \"kubernetes.io/projected/cf76b176-14cf-4972-9384-7a0c69151f84-kube-api-access-fbzpq\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.062512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.062548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-config\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.063770 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-dns-svc\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.064127 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.065314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-config\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.065345 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-sb\") pod 
\"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.090793 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbzpq\" (UniqueName: \"kubernetes.io/projected/cf76b176-14cf-4972-9384-7a0c69151f84-kube-api-access-fbzpq\") pod \"dnsmasq-dns-76f9c4c8bc-nqtx8\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.138198 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.202244 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.815743 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-nqtx8"] Dec 10 14:55:50 crc kubenswrapper[4718]: W1210 14:55:50.825310 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf76b176_14cf_4972_9384_7a0c69151f84.slice/crio-3340fba6875db912b203ea8f7075075f9619277405b4cd4046fe49a5bd336e3d WatchSource:0}: Error finding container 3340fba6875db912b203ea8f7075075f9619277405b4cd4046fe49a5bd336e3d: Status 404 returned error can't find the container with id 3340fba6875db912b203ea8f7075075f9619277405b4cd4046fe49a5bd336e3d Dec 10 14:55:50 crc kubenswrapper[4718]: I1210 14:55:50.935923 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" event={"ID":"cf76b176-14cf-4972-9384-7a0c69151f84","Type":"ContainerStarted","Data":"3340fba6875db912b203ea8f7075075f9619277405b4cd4046fe49a5bd336e3d"} Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.088812 4718 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/swift-storage-0"] Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.097091 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.101133 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.101520 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.104473 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6dn8w" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.108314 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.111887 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.139608 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.207347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6srw\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-kube-api-access-r6srw\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.207756 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e094c947-215b-4386-906f-5ee833afa9d0-lock\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc 
kubenswrapper[4718]: I1210 14:55:51.207830 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.207866 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.207882 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e094c947-215b-4386-906f-5ee833afa9d0-cache\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.310644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.310709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e094c947-215b-4386-906f-5ee833afa9d0-cache\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.310879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6srw\" (UniqueName: 
\"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-kube-api-access-r6srw\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.310918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e094c947-215b-4386-906f-5ee833afa9d0-lock\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.310982 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: E1210 14:55:51.312330 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:51 crc kubenswrapper[4718]: E1210 14:55:51.312466 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:51 crc kubenswrapper[4718]: E1210 14:55:51.312923 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift podName:e094c947-215b-4386-906f-5ee833afa9d0 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:51.812823277 +0000 UTC m=+1456.762046694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift") pod "swift-storage-0" (UID: "e094c947-215b-4386-906f-5ee833afa9d0") : configmap "swift-ring-files" not found Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.313056 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e094c947-215b-4386-906f-5ee833afa9d0-cache\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.313105 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e094c947-215b-4386-906f-5ee833afa9d0-lock\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.313587 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.350440 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6srw\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-kube-api-access-r6srw\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.354188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " 
pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.665557 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nhp59"] Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.668056 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.670858 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.671106 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.672507 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.691356 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nhp59"] Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.824287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea7defa5-2130-4d6d-8bba-9416bec21dfa-etc-swift\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.824404 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-combined-ca-bundle\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.824444 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-scripts\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.824510 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-dispersionconf\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.824663 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-ring-data-devices\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.824986 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwsz\" (UniqueName: \"kubernetes.io/projected/ea7defa5-2130-4d6d-8bba-9416bec21dfa-kube-api-access-7bwsz\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.825310 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-swiftconf\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.825430 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:51 crc kubenswrapper[4718]: E1210 14:55:51.825794 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:51 crc kubenswrapper[4718]: E1210 14:55:51.825848 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:51 crc kubenswrapper[4718]: E1210 14:55:51.825937 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift podName:e094c947-215b-4386-906f-5ee833afa9d0 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:52.825904085 +0000 UTC m=+1457.775127502 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift") pod "swift-storage-0" (UID: "e094c947-215b-4386-906f-5ee833afa9d0") : configmap "swift-ring-files" not found Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.927954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea7defa5-2130-4d6d-8bba-9416bec21dfa-etc-swift\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.928035 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-combined-ca-bundle\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 
14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.928065 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-scripts\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.928861 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea7defa5-2130-4d6d-8bba-9416bec21dfa-etc-swift\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.929497 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-scripts\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.932573 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-dispersionconf\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.932673 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-ring-data-devices\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.932871 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7bwsz\" (UniqueName: \"kubernetes.io/projected/ea7defa5-2130-4d6d-8bba-9416bec21dfa-kube-api-access-7bwsz\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.933098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-swiftconf\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.934563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-ring-data-devices\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.937118 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-combined-ca-bundle\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.937615 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-swiftconf\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.938144 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-dispersionconf\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.956226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwsz\" (UniqueName: \"kubernetes.io/projected/ea7defa5-2130-4d6d-8bba-9416bec21dfa-kube-api-access-7bwsz\") pod \"swift-ring-rebalance-nhp59\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.964095 4718 generic.go:334] "Generic (PLEG): container finished" podID="cf76b176-14cf-4972-9384-7a0c69151f84" containerID="2f7cdfb3d0c175c44e60379e8206d5d52bb733fd38297d73b8dff8867da7d899" exitCode=0 Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.964216 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" event={"ID":"cf76b176-14cf-4972-9384-7a0c69151f84","Type":"ContainerDied","Data":"2f7cdfb3d0c175c44e60379e8206d5d52bb733fd38297d73b8dff8867da7d899"} Dec 10 14:55:51 crc kubenswrapper[4718]: I1210 14:55:51.999479 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:55:52 crc kubenswrapper[4718]: I1210 14:55:52.559104 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nhp59"] Dec 10 14:55:52 crc kubenswrapper[4718]: W1210 14:55:52.568129 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea7defa5_2130_4d6d_8bba_9416bec21dfa.slice/crio-ef27066b5d923d770874d9592e3cf142494ccbb0d8d13650efd64bf07290ba33 WatchSource:0}: Error finding container ef27066b5d923d770874d9592e3cf142494ccbb0d8d13650efd64bf07290ba33: Status 404 returned error can't find the container with id ef27066b5d923d770874d9592e3cf142494ccbb0d8d13650efd64bf07290ba33 Dec 10 14:55:52 crc kubenswrapper[4718]: I1210 14:55:52.861650 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:52 crc kubenswrapper[4718]: E1210 14:55:52.861964 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:52 crc kubenswrapper[4718]: E1210 14:55:52.862010 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:52 crc kubenswrapper[4718]: E1210 14:55:52.862139 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift podName:e094c947-215b-4386-906f-5ee833afa9d0 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:54.862110378 +0000 UTC m=+1459.811333795 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift") pod "swift-storage-0" (UID: "e094c947-215b-4386-906f-5ee833afa9d0") : configmap "swift-ring-files" not found Dec 10 14:55:52 crc kubenswrapper[4718]: I1210 14:55:52.987360 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" event={"ID":"cf76b176-14cf-4972-9384-7a0c69151f84","Type":"ContainerStarted","Data":"d1aac8ecc11e9d176473e21ff10b04dcb193e5b9f6b234592fbb3ca7dfb64480"} Dec 10 14:55:52 crc kubenswrapper[4718]: I1210 14:55:52.987513 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:55:52 crc kubenswrapper[4718]: I1210 14:55:52.995334 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nhp59" event={"ID":"ea7defa5-2130-4d6d-8bba-9416bec21dfa","Type":"ContainerStarted","Data":"ef27066b5d923d770874d9592e3cf142494ccbb0d8d13650efd64bf07290ba33"} Dec 10 14:55:53 crc kubenswrapper[4718]: I1210 14:55:53.523171 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:53 crc kubenswrapper[4718]: I1210 14:55:53.565344 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" podStartSLOduration=4.56531203 podStartE2EDuration="4.56531203s" podCreationTimestamp="2025-12-10 14:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:55:53.02592749 +0000 UTC m=+1457.975150907" watchObservedRunningTime="2025-12-10 14:55:53.56531203 +0000 UTC m=+1458.514535447" Dec 10 14:55:53 crc kubenswrapper[4718]: I1210 14:55:53.775748 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 10 14:55:54 crc 
kubenswrapper[4718]: I1210 14:55:54.194631 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.335370 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.520400 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.524586 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.529939 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7jfxm" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.530266 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.531862 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.532346 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.539794 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.702291 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870a0a88-dfaa-49b8-96ae-96f5991f2e75-config\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.702948 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-57djk\" (UniqueName: \"kubernetes.io/projected/870a0a88-dfaa-49b8-96ae-96f5991f2e75-kube-api-access-57djk\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.703017 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/870a0a88-dfaa-49b8-96ae-96f5991f2e75-scripts\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.703140 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.703201 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/870a0a88-dfaa-49b8-96ae-96f5991f2e75-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.703237 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.703253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.805974 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.806047 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.806098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870a0a88-dfaa-49b8-96ae-96f5991f2e75-config\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.806182 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57djk\" (UniqueName: \"kubernetes.io/projected/870a0a88-dfaa-49b8-96ae-96f5991f2e75-kube-api-access-57djk\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.806223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/870a0a88-dfaa-49b8-96ae-96f5991f2e75-scripts\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc 
kubenswrapper[4718]: I1210 14:55:54.806316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.806372 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/870a0a88-dfaa-49b8-96ae-96f5991f2e75-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.807107 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/870a0a88-dfaa-49b8-96ae-96f5991f2e75-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.808812 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/870a0a88-dfaa-49b8-96ae-96f5991f2e75-scripts\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.811487 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/870a0a88-dfaa-49b8-96ae-96f5991f2e75-config\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.815333 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.815574 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.816313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870a0a88-dfaa-49b8-96ae-96f5991f2e75-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.830718 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57djk\" (UniqueName: \"kubernetes.io/projected/870a0a88-dfaa-49b8-96ae-96f5991f2e75-kube-api-access-57djk\") pod \"ovn-northd-0\" (UID: \"870a0a88-dfaa-49b8-96ae-96f5991f2e75\") " pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.875558 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 10 14:55:54 crc kubenswrapper[4718]: I1210 14:55:54.908964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:54 crc kubenswrapper[4718]: E1210 14:55:54.909287 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:54 crc kubenswrapper[4718]: E1210 14:55:54.909334 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:54 crc kubenswrapper[4718]: E1210 14:55:54.909471 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift podName:e094c947-215b-4386-906f-5ee833afa9d0 nodeName:}" failed. No retries permitted until 2025-12-10 14:55:58.909439024 +0000 UTC m=+1463.858662441 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift") pod "swift-storage-0" (UID: "e094c947-215b-4386-906f-5ee833afa9d0") : configmap "swift-ring-files" not found Dec 10 14:55:56 crc kubenswrapper[4718]: I1210 14:55:56.602964 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 10 14:55:56 crc kubenswrapper[4718]: I1210 14:55:56.603326 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 10 14:55:59 crc kubenswrapper[4718]: I1210 14:55:59.073983 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:55:59 crc kubenswrapper[4718]: E1210 14:55:59.074749 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:55:59 crc kubenswrapper[4718]: E1210 14:55:59.074798 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:55:59 crc kubenswrapper[4718]: E1210 14:55:59.074870 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift podName:e094c947-215b-4386-906f-5ee833afa9d0 nodeName:}" failed. No retries permitted until 2025-12-10 14:56:07.074846353 +0000 UTC m=+1472.024069770 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift") pod "swift-storage-0" (UID: "e094c947-215b-4386-906f-5ee833afa9d0") : configmap "swift-ring-files" not found Dec 10 14:55:59 crc kubenswrapper[4718]: I1210 14:55:59.625971 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 10 14:55:59 crc kubenswrapper[4718]: I1210 14:55:59.814157 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.087968 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-ktpkr"] Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.090265 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.100733 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-d3c5-account-create-update-8jq74"] Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.108985 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.112624 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-ktpkr"] Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.114012 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.124942 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.129011 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-d3c5-account-create-update-8jq74"] Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.201713 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfaf5d8-4439-43ec-b379-df78a4e20036-operator-scripts\") pod \"watcher-db-create-ktpkr\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.201867 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8v77\" (UniqueName: \"kubernetes.io/projected/7dfaf5d8-4439-43ec-b379-df78a4e20036-kube-api-access-m8v77\") pod \"watcher-db-create-ktpkr\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.202118 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wpf\" (UniqueName: \"kubernetes.io/projected/a0434939-9b67-49dd-a86c-d3c366f75328-kube-api-access-r6wpf\") pod \"watcher-d3c5-account-create-update-8jq74\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " 
pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.202224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0434939-9b67-49dd-a86c-d3c366f75328-operator-scripts\") pod \"watcher-d3c5-account-create-update-8jq74\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.207011 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.303689 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6wpf\" (UniqueName: \"kubernetes.io/projected/a0434939-9b67-49dd-a86c-d3c366f75328-kube-api-access-r6wpf\") pod \"watcher-d3c5-account-create-update-8jq74\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.303789 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0434939-9b67-49dd-a86c-d3c366f75328-operator-scripts\") pod \"watcher-d3c5-account-create-update-8jq74\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.303924 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfaf5d8-4439-43ec-b379-df78a4e20036-operator-scripts\") pod \"watcher-db-create-ktpkr\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.303989 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m8v77\" (UniqueName: \"kubernetes.io/projected/7dfaf5d8-4439-43ec-b379-df78a4e20036-kube-api-access-m8v77\") pod \"watcher-db-create-ktpkr\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.305967 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0434939-9b67-49dd-a86c-d3c366f75328-operator-scripts\") pod \"watcher-d3c5-account-create-update-8jq74\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.306355 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfaf5d8-4439-43ec-b379-df78a4e20036-operator-scripts\") pod \"watcher-db-create-ktpkr\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.343340 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8v77\" (UniqueName: \"kubernetes.io/projected/7dfaf5d8-4439-43ec-b379-df78a4e20036-kube-api-access-m8v77\") pod \"watcher-db-create-ktpkr\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.346018 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6wpf\" (UniqueName: \"kubernetes.io/projected/a0434939-9b67-49dd-a86c-d3c366f75328-kube-api-access-r6wpf\") pod \"watcher-d3c5-account-create-update-8jq74\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.352814 4718 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-9888d"] Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.353179 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="dnsmasq-dns" containerID="cri-o://ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8" gracePeriod=10 Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.429275 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.460019 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:00 crc kubenswrapper[4718]: I1210 14:56:00.584366 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.908731 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.966300 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-dns-svc\") pod \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.966447 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-nb\") pod \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.966516 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-sb\") pod \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.966556 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-config\") pod \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.966710 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m6rg\" (UniqueName: \"kubernetes.io/projected/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-kube-api-access-8m6rg\") pod \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\" (UID: \"70c358d6-fb60-4ab9-a1d2-90a25b22fc41\") " Dec 10 14:56:01 crc kubenswrapper[4718]: I1210 14:56:01.984732 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-kube-api-access-8m6rg" (OuterVolumeSpecName: "kube-api-access-8m6rg") pod "70c358d6-fb60-4ab9-a1d2-90a25b22fc41" (UID: "70c358d6-fb60-4ab9-a1d2-90a25b22fc41"). InnerVolumeSpecName "kube-api-access-8m6rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.070658 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m6rg\" (UniqueName: \"kubernetes.io/projected/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-kube-api-access-8m6rg\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.110027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-config" (OuterVolumeSpecName: "config") pod "70c358d6-fb60-4ab9-a1d2-90a25b22fc41" (UID: "70c358d6-fb60-4ab9-a1d2-90a25b22fc41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.112094 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "70c358d6-fb60-4ab9-a1d2-90a25b22fc41" (UID: "70c358d6-fb60-4ab9-a1d2-90a25b22fc41"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.112497 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70c358d6-fb60-4ab9-a1d2-90a25b22fc41" (UID: "70c358d6-fb60-4ab9-a1d2-90a25b22fc41"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.116113 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70c358d6-fb60-4ab9-a1d2-90a25b22fc41" (UID: "70c358d6-fb60-4ab9-a1d2-90a25b22fc41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.173662 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.173726 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.173739 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.173755 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70c358d6-fb60-4ab9-a1d2-90a25b22fc41-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.188818 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 10 14:56:02 crc kubenswrapper[4718]: W1210 14:56:02.193422 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod870a0a88_dfaa_49b8_96ae_96f5991f2e75.slice/crio-d76142907fa6a2bc7de569af804a0fc445b5819f4471fd2714fbb20a1e5fa0dd 
WatchSource:0}: Error finding container d76142907fa6a2bc7de569af804a0fc445b5819f4471fd2714fbb20a1e5fa0dd: Status 404 returned error can't find the container with id d76142907fa6a2bc7de569af804a0fc445b5819f4471fd2714fbb20a1e5fa0dd Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.225448 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"870a0a88-dfaa-49b8-96ae-96f5991f2e75","Type":"ContainerStarted","Data":"d76142907fa6a2bc7de569af804a0fc445b5819f4471fd2714fbb20a1e5fa0dd"} Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.229818 4718 generic.go:334] "Generic (PLEG): container finished" podID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerID="ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8" exitCode=0 Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.229869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" event={"ID":"70c358d6-fb60-4ab9-a1d2-90a25b22fc41","Type":"ContainerDied","Data":"ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8"} Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.229903 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" event={"ID":"70c358d6-fb60-4ab9-a1d2-90a25b22fc41","Type":"ContainerDied","Data":"52a87a9a3d6bcf2eab8b92f0c90a008ce355e13306642b5564f066d2af1f3b64"} Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.229930 4718 scope.go:117] "RemoveContainer" containerID="ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.230141 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-9888d" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.268081 4718 scope.go:117] "RemoveContainer" containerID="d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.296181 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-9888d"] Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.306529 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-9888d"] Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.309718 4718 scope.go:117] "RemoveContainer" containerID="ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8" Dec 10 14:56:02 crc kubenswrapper[4718]: E1210 14:56:02.311686 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8\": container with ID starting with ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8 not found: ID does not exist" containerID="ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.311765 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8"} err="failed to get container status \"ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8\": rpc error: code = NotFound desc = could not find container \"ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8\": container with ID starting with ec3eb61a9c9aa6199c543fd386fad65006c618b56050a4a97d14354821670db8 not found: ID does not exist" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.311806 4718 scope.go:117] "RemoveContainer" containerID="d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad" Dec 10 
14:56:02 crc kubenswrapper[4718]: E1210 14:56:02.312706 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad\": container with ID starting with d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad not found: ID does not exist" containerID="d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.312761 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad"} err="failed to get container status \"d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad\": rpc error: code = NotFound desc = could not find container \"d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad\": container with ID starting with d3d55b735cbd45ecf74bca41cb5524fbb537701aff9307c4016798c12a5259ad not found: ID does not exist" Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.328900 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-d3c5-account-create-update-8jq74"] Dec 10 14:56:02 crc kubenswrapper[4718]: W1210 14:56:02.329754 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0434939_9b67_49dd_a86c_d3c366f75328.slice/crio-8106f87ac3cd50f7c5d0238bd33a0e01b0f7a2c57641df8577e51f2cf1156429 WatchSource:0}: Error finding container 8106f87ac3cd50f7c5d0238bd33a0e01b0f7a2c57641df8577e51f2cf1156429: Status 404 returned error can't find the container with id 8106f87ac3cd50f7c5d0238bd33a0e01b0f7a2c57641df8577e51f2cf1156429 Dec 10 14:56:02 crc kubenswrapper[4718]: I1210 14:56:02.450358 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-ktpkr"] Dec 10 14:56:02 crc kubenswrapper[4718]: W1210 14:56:02.451640 4718 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dfaf5d8_4439_43ec_b379_df78a4e20036.slice/crio-cd649ae068dcebcb7aafa8bc7fcb9723384bb1b9b4ea9040c6b2f5c454bad3f3 WatchSource:0}: Error finding container cd649ae068dcebcb7aafa8bc7fcb9723384bb1b9b4ea9040c6b2f5c454bad3f3: Status 404 returned error can't find the container with id cd649ae068dcebcb7aafa8bc7fcb9723384bb1b9b4ea9040c6b2f5c454bad3f3 Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.243468 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerStarted","Data":"1bf31a0918086e5ea79a4eb9e5ad6ffbe73f1fa6e3677827467ec663c596cc98"} Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.245560 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ktpkr" event={"ID":"7dfaf5d8-4439-43ec-b379-df78a4e20036","Type":"ContainerStarted","Data":"165041dfec3fd1a13de2cdfb612c54a2d79536a91631ef4f4d900216e1f99d71"} Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.245590 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ktpkr" event={"ID":"7dfaf5d8-4439-43ec-b379-df78a4e20036","Type":"ContainerStarted","Data":"cd649ae068dcebcb7aafa8bc7fcb9723384bb1b9b4ea9040c6b2f5c454bad3f3"} Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.248822 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d3c5-account-create-update-8jq74" event={"ID":"a0434939-9b67-49dd-a86c-d3c366f75328","Type":"ContainerStarted","Data":"59bdd796d7d6c933cb1363548c6ed8b05419bc137c307cf44d7a62ed849e92f0"} Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.248873 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d3c5-account-create-update-8jq74" 
event={"ID":"a0434939-9b67-49dd-a86c-d3c366f75328","Type":"ContainerStarted","Data":"8106f87ac3cd50f7c5d0238bd33a0e01b0f7a2c57641df8577e51f2cf1156429"} Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.252625 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nhp59" event={"ID":"ea7defa5-2130-4d6d-8bba-9416bec21dfa","Type":"ContainerStarted","Data":"c9c0e0a614eb99a46e7fee4f275ecfed2eadb733257d7dc05cc902927c6066e6"} Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.268915 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-ktpkr" podStartSLOduration=3.2688779009999998 podStartE2EDuration="3.268877901s" podCreationTimestamp="2025-12-10 14:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:03.267107725 +0000 UTC m=+1468.216331162" watchObservedRunningTime="2025-12-10 14:56:03.268877901 +0000 UTC m=+1468.218101308" Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.297895 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nhp59" podStartSLOduration=3.183806209 podStartE2EDuration="12.29785447s" podCreationTimestamp="2025-12-10 14:55:51 +0000 UTC" firstStartedPulling="2025-12-10 14:55:52.571455318 +0000 UTC m=+1457.520678735" lastFinishedPulling="2025-12-10 14:56:01.685503579 +0000 UTC m=+1466.634726996" observedRunningTime="2025-12-10 14:56:03.292049712 +0000 UTC m=+1468.241273139" watchObservedRunningTime="2025-12-10 14:56:03.29785447 +0000 UTC m=+1468.247077887" Dec 10 14:56:03 crc kubenswrapper[4718]: I1210 14:56:03.313612 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-d3c5-account-create-update-8jq74" podStartSLOduration=3.313587432 podStartE2EDuration="3.313587432s" podCreationTimestamp="2025-12-10 14:56:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:03.310270567 +0000 UTC m=+1468.259493984" watchObservedRunningTime="2025-12-10 14:56:03.313587432 +0000 UTC m=+1468.262810849" Dec 10 14:56:04 crc kubenswrapper[4718]: I1210 14:56:04.033787 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" path="/var/lib/kubelet/pods/70c358d6-fb60-4ab9-a1d2-90a25b22fc41/volumes" Dec 10 14:56:04 crc kubenswrapper[4718]: I1210 14:56:04.273989 4718 generic.go:334] "Generic (PLEG): container finished" podID="a0434939-9b67-49dd-a86c-d3c366f75328" containerID="59bdd796d7d6c933cb1363548c6ed8b05419bc137c307cf44d7a62ed849e92f0" exitCode=0 Dec 10 14:56:04 crc kubenswrapper[4718]: I1210 14:56:04.274052 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d3c5-account-create-update-8jq74" event={"ID":"a0434939-9b67-49dd-a86c-d3c366f75328","Type":"ContainerDied","Data":"59bdd796d7d6c933cb1363548c6ed8b05419bc137c307cf44d7a62ed849e92f0"} Dec 10 14:56:04 crc kubenswrapper[4718]: I1210 14:56:04.279068 4718 generic.go:334] "Generic (PLEG): container finished" podID="7dfaf5d8-4439-43ec-b379-df78a4e20036" containerID="165041dfec3fd1a13de2cdfb612c54a2d79536a91631ef4f4d900216e1f99d71" exitCode=0 Dec 10 14:56:04 crc kubenswrapper[4718]: I1210 14:56:04.279816 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ktpkr" event={"ID":"7dfaf5d8-4439-43ec-b379-df78a4e20036","Type":"ContainerDied","Data":"165041dfec3fd1a13de2cdfb612c54a2d79536a91631ef4f4d900216e1f99d71"} Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.291985 4718 generic.go:334] "Generic (PLEG): container finished" podID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerID="49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d" exitCode=0 Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.292068 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"282d32e9-d539-4bac-9fd1-a8735e8d92e1","Type":"ContainerDied","Data":"49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d"} Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.296435 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"870a0a88-dfaa-49b8-96ae-96f5991f2e75","Type":"ContainerStarted","Data":"a8adfbef817fa7d420ccecdd7524fb867b9271c80492d7f3bcac6eeb90acb601"} Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.296512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"870a0a88-dfaa-49b8-96ae-96f5991f2e75","Type":"ContainerStarted","Data":"b7115f2d9b0ae6db50927ee759c4030af04adc26e07b3f4eecf5e2716d20411a"} Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.360687 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.513477996 podStartE2EDuration="11.360655622s" podCreationTimestamp="2025-12-10 14:55:54 +0000 UTC" firstStartedPulling="2025-12-10 14:56:02.197078489 +0000 UTC m=+1467.146301906" lastFinishedPulling="2025-12-10 14:56:04.044256115 +0000 UTC m=+1468.993479532" observedRunningTime="2025-12-10 14:56:05.354378921 +0000 UTC m=+1470.303602348" watchObservedRunningTime="2025-12-10 14:56:05.360655622 +0000 UTC m=+1470.309879039" Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.890044 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:05 crc kubenswrapper[4718]: I1210 14:56:05.902599 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.008903 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0434939-9b67-49dd-a86c-d3c366f75328-operator-scripts\") pod \"a0434939-9b67-49dd-a86c-d3c366f75328\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.008978 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6wpf\" (UniqueName: \"kubernetes.io/projected/a0434939-9b67-49dd-a86c-d3c366f75328-kube-api-access-r6wpf\") pod \"a0434939-9b67-49dd-a86c-d3c366f75328\" (UID: \"a0434939-9b67-49dd-a86c-d3c366f75328\") " Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.009234 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8v77\" (UniqueName: \"kubernetes.io/projected/7dfaf5d8-4439-43ec-b379-df78a4e20036-kube-api-access-m8v77\") pod \"7dfaf5d8-4439-43ec-b379-df78a4e20036\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.009302 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfaf5d8-4439-43ec-b379-df78a4e20036-operator-scripts\") pod \"7dfaf5d8-4439-43ec-b379-df78a4e20036\" (UID: \"7dfaf5d8-4439-43ec-b379-df78a4e20036\") " Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.009683 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0434939-9b67-49dd-a86c-d3c366f75328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0434939-9b67-49dd-a86c-d3c366f75328" (UID: "a0434939-9b67-49dd-a86c-d3c366f75328"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.010191 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0434939-9b67-49dd-a86c-d3c366f75328-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.010725 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dfaf5d8-4439-43ec-b379-df78a4e20036-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dfaf5d8-4439-43ec-b379-df78a4e20036" (UID: "7dfaf5d8-4439-43ec-b379-df78a4e20036"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.015877 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfaf5d8-4439-43ec-b379-df78a4e20036-kube-api-access-m8v77" (OuterVolumeSpecName: "kube-api-access-m8v77") pod "7dfaf5d8-4439-43ec-b379-df78a4e20036" (UID: "7dfaf5d8-4439-43ec-b379-df78a4e20036"). InnerVolumeSpecName "kube-api-access-m8v77". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.015933 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0434939-9b67-49dd-a86c-d3c366f75328-kube-api-access-r6wpf" (OuterVolumeSpecName: "kube-api-access-r6wpf") pod "a0434939-9b67-49dd-a86c-d3c366f75328" (UID: "a0434939-9b67-49dd-a86c-d3c366f75328"). InnerVolumeSpecName "kube-api-access-r6wpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.114295 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfaf5d8-4439-43ec-b379-df78a4e20036-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.114341 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6wpf\" (UniqueName: \"kubernetes.io/projected/a0434939-9b67-49dd-a86c-d3c366f75328-kube-api-access-r6wpf\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.114355 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8v77\" (UniqueName: \"kubernetes.io/projected/7dfaf5d8-4439-43ec-b379-df78a4e20036-kube-api-access-m8v77\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.235719 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.241895 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-l8vtq" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.314935 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-d3c5-account-create-update-8jq74" event={"ID":"a0434939-9b67-49dd-a86c-d3c366f75328","Type":"ContainerDied","Data":"8106f87ac3cd50f7c5d0238bd33a0e01b0f7a2c57641df8577e51f2cf1156429"} Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.315004 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8106f87ac3cd50f7c5d0238bd33a0e01b0f7a2c57641df8577e51f2cf1156429" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.315162 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-d3c5-account-create-update-8jq74" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.321790 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"282d32e9-d539-4bac-9fd1-a8735e8d92e1","Type":"ContainerStarted","Data":"59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc"} Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.323812 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.330320 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerStarted","Data":"7c47ffed17c564b3746d6b3253d391d6fbc1beb686f24aec2763ec3af824c602"} Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.333565 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-ktpkr" event={"ID":"7dfaf5d8-4439-43ec-b379-df78a4e20036","Type":"ContainerDied","Data":"cd649ae068dcebcb7aafa8bc7fcb9723384bb1b9b4ea9040c6b2f5c454bad3f3"} Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.333605 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd649ae068dcebcb7aafa8bc7fcb9723384bb1b9b4ea9040c6b2f5c454bad3f3" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.333909 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-ktpkr" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.334029 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.358698 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.945545568 podStartE2EDuration="1m55.358665299s" podCreationTimestamp="2025-12-10 14:54:11 +0000 UTC" firstStartedPulling="2025-12-10 14:54:16.915183495 +0000 UTC m=+1361.864406912" lastFinishedPulling="2025-12-10 14:55:30.328303226 +0000 UTC m=+1435.277526643" observedRunningTime="2025-12-10 14:56:06.356579946 +0000 UTC m=+1471.305803363" watchObservedRunningTime="2025-12-10 14:56:06.358665299 +0000 UTC m=+1471.307888716" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.803847 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kbxrn"] Dec 10 14:56:06 crc kubenswrapper[4718]: E1210 14:56:06.807438 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="init" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.807479 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="init" Dec 10 14:56:06 crc kubenswrapper[4718]: E1210 14:56:06.807507 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="dnsmasq-dns" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.807675 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="dnsmasq-dns" Dec 10 14:56:06 crc kubenswrapper[4718]: E1210 14:56:06.807754 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0434939-9b67-49dd-a86c-d3c366f75328" containerName="mariadb-account-create-update" Dec 10 
14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.807771 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0434939-9b67-49dd-a86c-d3c366f75328" containerName="mariadb-account-create-update" Dec 10 14:56:06 crc kubenswrapper[4718]: E1210 14:56:06.807804 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfaf5d8-4439-43ec-b379-df78a4e20036" containerName="mariadb-database-create" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.807815 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfaf5d8-4439-43ec-b379-df78a4e20036" containerName="mariadb-database-create" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.808660 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0434939-9b67-49dd-a86c-d3c366f75328" containerName="mariadb-account-create-update" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.808707 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c358d6-fb60-4ab9-a1d2-90a25b22fc41" containerName="dnsmasq-dns" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.808736 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfaf5d8-4439-43ec-b379-df78a4e20036" containerName="mariadb-database-create" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.811179 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.863079 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kbxrn"] Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.994731 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjtxc\" (UniqueName: \"kubernetes.io/projected/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-kube-api-access-cjtxc\") pod \"keystone-db-create-kbxrn\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:06 crc kubenswrapper[4718]: I1210 14:56:06.995253 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-operator-scripts\") pod \"keystone-db-create-kbxrn\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.076056 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-316a-account-create-update-xsvgc"] Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.077937 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.081415 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.096893 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-316a-account-create-update-xsvgc"] Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.097244 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-operator-scripts\") pod \"keystone-db-create-kbxrn\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.097320 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.097380 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjtxc\" (UniqueName: \"kubernetes.io/projected/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-kube-api-access-cjtxc\") pod \"keystone-db-create-kbxrn\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:07 crc kubenswrapper[4718]: E1210 14:56:07.097991 4718 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 10 14:56:07 crc kubenswrapper[4718]: E1210 14:56:07.098021 4718 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 10 14:56:07 crc kubenswrapper[4718]: E1210 
14:56:07.098079 4718 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift podName:e094c947-215b-4386-906f-5ee833afa9d0 nodeName:}" failed. No retries permitted until 2025-12-10 14:56:23.098056615 +0000 UTC m=+1488.047280032 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift") pod "swift-storage-0" (UID: "e094c947-215b-4386-906f-5ee833afa9d0") : configmap "swift-ring-files" not found Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.098745 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-operator-scripts\") pod \"keystone-db-create-kbxrn\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.146535 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjtxc\" (UniqueName: \"kubernetes.io/projected/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-kube-api-access-cjtxc\") pod \"keystone-db-create-kbxrn\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.148697 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.185925 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dgmf7"] Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.187595 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.199921 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjhg\" (UniqueName: \"kubernetes.io/projected/65778ba5-a5bf-495f-97ba-045a6ef28ce8-kube-api-access-whjhg\") pod \"keystone-316a-account-create-update-xsvgc\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.200103 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65778ba5-a5bf-495f-97ba-045a6ef28ce8-operator-scripts\") pod \"keystone-316a-account-create-update-xsvgc\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.211942 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ftrjh-config-57trk"] Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.213648 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.214148 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dgmf7"] Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.216492 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.277916 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ftrjh-config-57trk"] Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.305897 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65778ba5-a5bf-495f-97ba-045a6ef28ce8-operator-scripts\") pod \"keystone-316a-account-create-update-xsvgc\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.306093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhdt\" (UniqueName: \"kubernetes.io/projected/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-kube-api-access-plhdt\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.306345 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-log-ovn\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.306567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-whjhg\" (UniqueName: \"kubernetes.io/projected/65778ba5-a5bf-495f-97ba-045a6ef28ce8-kube-api-access-whjhg\") pod \"keystone-316a-account-create-update-xsvgc\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.306778 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-scripts\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.306965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ba3ecd-dc8b-455c-a7a5-737d89a02227-operator-scripts\") pod \"placement-db-create-dgmf7\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.307068 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-additional-scripts\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.307209 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run-ovn\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.307419 
4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.307451 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tvrj\" (UniqueName: \"kubernetes.io/projected/55ba3ecd-dc8b-455c-a7a5-737d89a02227-kube-api-access-5tvrj\") pod \"placement-db-create-dgmf7\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.307686 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65778ba5-a5bf-495f-97ba-045a6ef28ce8-operator-scripts\") pod \"keystone-316a-account-create-update-xsvgc\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.354543 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjhg\" (UniqueName: \"kubernetes.io/projected/65778ba5-a5bf-495f-97ba-045a6ef28ce8-kube-api-access-whjhg\") pod \"keystone-316a-account-create-update-xsvgc\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.356486 4718 generic.go:334] "Generic (PLEG): container finished" podID="5611ee41-14a4-45d3-88b1-e6e6c9bc4d13" containerID="c7bce5dfb390fbf9cd361478a7cdb8ba64af763cff7949a115daaf88f8422b66" exitCode=0 Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.359483 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13","Type":"ContainerDied","Data":"c7bce5dfb390fbf9cd361478a7cdb8ba64af763cff7949a115daaf88f8422b66"} Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.409752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhdt\" (UniqueName: \"kubernetes.io/projected/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-kube-api-access-plhdt\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.409833 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-log-ovn\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.409893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-scripts\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.409928 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ba3ecd-dc8b-455c-a7a5-737d89a02227-operator-scripts\") pod \"placement-db-create-dgmf7\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.409954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-additional-scripts\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.409990 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run-ovn\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.410023 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.410046 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tvrj\" (UniqueName: \"kubernetes.io/projected/55ba3ecd-dc8b-455c-a7a5-737d89a02227-kube-api-access-5tvrj\") pod \"placement-db-create-dgmf7\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.410939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ba3ecd-dc8b-455c-a7a5-737d89a02227-operator-scripts\") pod \"placement-db-create-dgmf7\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.411000 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-additional-scripts\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.411307 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run-ovn\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.411306 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-log-ovn\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.411363 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.413170 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-scripts\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.431991 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.454789 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tvrj\" (UniqueName: \"kubernetes.io/projected/55ba3ecd-dc8b-455c-a7a5-737d89a02227-kube-api-access-5tvrj\") pod \"placement-db-create-dgmf7\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.461622 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhdt\" (UniqueName: \"kubernetes.io/projected/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-kube-api-access-plhdt\") pod \"ovn-controller-ftrjh-config-57trk\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.527074 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:07 crc kubenswrapper[4718]: I1210 14:56:07.552129 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.092108 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kbxrn"] Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.235485 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-66e1-account-create-update-tt7r2"] Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.241893 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.248331 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.310570 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66e1-account-create-update-tt7r2"] Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.322233 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-316a-account-create-update-xsvgc"] Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.348046 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp8bf\" (UniqueName: \"kubernetes.io/projected/93986540-1957-4930-a489-dd0e648099a7-kube-api-access-fp8bf\") pod \"placement-66e1-account-create-update-tt7r2\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.348184 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93986540-1957-4930-a489-dd0e648099a7-operator-scripts\") pod \"placement-66e1-account-create-update-tt7r2\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.424922 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"5611ee41-14a4-45d3-88b1-e6e6c9bc4d13","Type":"ContainerStarted","Data":"6a93e0bc2dcb3ea325854e7cab995c418c9534f23703a7f1c505da55eec45689"} Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.425621 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Dec 10 
14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.464242 4718 generic.go:334] "Generic (PLEG): container finished" podID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerID="c8b2fcef7b1e85bda85dbf340e17ddc30ec7066a78b9e32b881e0b894f46560d" exitCode=0 Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.464438 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0fcf07d8-859b-4547-8a32-824f40da6a93","Type":"ContainerDied","Data":"c8b2fcef7b1e85bda85dbf340e17ddc30ec7066a78b9e32b881e0b894f46560d"} Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.470969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp8bf\" (UniqueName: \"kubernetes.io/projected/93986540-1957-4930-a489-dd0e648099a7-kube-api-access-fp8bf\") pod \"placement-66e1-account-create-update-tt7r2\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.471787 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kbxrn" event={"ID":"6c0ffc92-31c4-4d50-be24-ce7b6fed6506","Type":"ContainerStarted","Data":"9cf5025f0be81cd11dffd0813e3b73e68190a411e4b7788af3a4fe556e2c55f3"} Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.493098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93986540-1957-4930-a489-dd0e648099a7-operator-scripts\") pod \"placement-66e1-account-create-update-tt7r2\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.495071 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93986540-1957-4930-a489-dd0e648099a7-operator-scripts\") pod 
\"placement-66e1-account-create-update-tt7r2\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.496110 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=-9223371921.358704 podStartE2EDuration="1m55.496072015s" podCreationTimestamp="2025-12-10 14:54:13 +0000 UTC" firstStartedPulling="2025-12-10 14:54:17.871298429 +0000 UTC m=+1362.820521846" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:08.482307204 +0000 UTC m=+1473.431530621" watchObservedRunningTime="2025-12-10 14:56:08.496072015 +0000 UTC m=+1473.445295432" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.523720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp8bf\" (UniqueName: \"kubernetes.io/projected/93986540-1957-4930-a489-dd0e648099a7-kube-api-access-fp8bf\") pod \"placement-66e1-account-create-update-tt7r2\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.649184 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.785406 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ftrjh-config-57trk"] Dec 10 14:56:08 crc kubenswrapper[4718]: I1210 14:56:08.853332 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dgmf7"] Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.460752 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-66e1-account-create-update-tt7r2"] Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.504110 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kbxrn" event={"ID":"6c0ffc92-31c4-4d50-be24-ce7b6fed6506","Type":"ContainerStarted","Data":"f64d88482fac23cf5723258ca8d01796d926859245dc69f1d95365586300f637"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.510613 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ftrjh-config-57trk" event={"ID":"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1","Type":"ContainerStarted","Data":"9767fa4be9db5239bc2e275864f64a42c331bb666051c212b5410c4b4cb772bf"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.525777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0fcf07d8-859b-4547-8a32-824f40da6a93","Type":"ContainerStarted","Data":"39f018b3b86bf926a366646740cd5924108999d0e42616a6428f138277fcd0df"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.526125 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.548147 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dgmf7" 
event={"ID":"55ba3ecd-dc8b-455c-a7a5-737d89a02227","Type":"ContainerStarted","Data":"6febd6def6c02ad2a58fbe71e1ce8de745dc9b13bd89f9fbcfcc1cfcbab515b6"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.548286 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dgmf7" event={"ID":"55ba3ecd-dc8b-455c-a7a5-737d89a02227","Type":"ContainerStarted","Data":"54e42d5ca72873d5e068f44be153c23f25998e17308c81a150a87f52e4160a13"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.555309 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-316a-account-create-update-xsvgc" event={"ID":"65778ba5-a5bf-495f-97ba-045a6ef28ce8","Type":"ContainerStarted","Data":"3858ea3a4ddc2da9264644a70e9fe5c89b6c4dec6a43e1dfa65be42268a53a87"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.555648 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-316a-account-create-update-xsvgc" event={"ID":"65778ba5-a5bf-495f-97ba-045a6ef28ce8","Type":"ContainerStarted","Data":"65e77b0857d869de03d40aeb03cb361f8641a8a1cbbb2766c0793f46c830c8b2"} Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.596074 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371917.258736 podStartE2EDuration="1m59.596040337s" podCreationTimestamp="2025-12-10 14:54:10 +0000 UTC" firstStartedPulling="2025-12-10 14:54:16.862108426 +0000 UTC m=+1361.811331843" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:09.586835682 +0000 UTC m=+1474.536059099" watchObservedRunningTime="2025-12-10 14:56:09.596040337 +0000 UTC m=+1474.545263754" Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.596292 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-kbxrn" podStartSLOduration=3.596283263 podStartE2EDuration="3.596283263s" podCreationTimestamp="2025-12-10 
14:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:09.551551871 +0000 UTC m=+1474.500775288" watchObservedRunningTime="2025-12-10 14:56:09.596283263 +0000 UTC m=+1474.545506700" Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.614284 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-dgmf7" podStartSLOduration=2.614259152 podStartE2EDuration="2.614259152s" podCreationTimestamp="2025-12-10 14:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:09.612909297 +0000 UTC m=+1474.562132704" watchObservedRunningTime="2025-12-10 14:56:09.614259152 +0000 UTC m=+1474.563482569" Dec 10 14:56:09 crc kubenswrapper[4718]: I1210 14:56:09.640910 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-316a-account-create-update-xsvgc" podStartSLOduration=2.640872761 podStartE2EDuration="2.640872761s" podCreationTimestamp="2025-12-10 14:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:09.629103791 +0000 UTC m=+1474.578327208" watchObservedRunningTime="2025-12-10 14:56:09.640872761 +0000 UTC m=+1474.590096178" Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.575641 4718 generic.go:334] "Generic (PLEG): container finished" podID="2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" containerID="0ce446244bfe9eab1d65bf0e9a1639aa43526a4913e613958f2dd7091d1b01d5" exitCode=0 Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.575779 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ftrjh-config-57trk" 
event={"ID":"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1","Type":"ContainerDied","Data":"0ce446244bfe9eab1d65bf0e9a1639aa43526a4913e613958f2dd7091d1b01d5"} Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.588195 4718 generic.go:334] "Generic (PLEG): container finished" podID="55ba3ecd-dc8b-455c-a7a5-737d89a02227" containerID="6febd6def6c02ad2a58fbe71e1ce8de745dc9b13bd89f9fbcfcc1cfcbab515b6" exitCode=0 Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.588307 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dgmf7" event={"ID":"55ba3ecd-dc8b-455c-a7a5-737d89a02227","Type":"ContainerDied","Data":"6febd6def6c02ad2a58fbe71e1ce8de745dc9b13bd89f9fbcfcc1cfcbab515b6"} Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.596424 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66e1-account-create-update-tt7r2" event={"ID":"93986540-1957-4930-a489-dd0e648099a7","Type":"ContainerStarted","Data":"022e20bd087211a2be64a6e1310f61cd5da2269ffc534257c6e8b0151e372ef8"} Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.596488 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66e1-account-create-update-tt7r2" event={"ID":"93986540-1957-4930-a489-dd0e648099a7","Type":"ContainerStarted","Data":"3c687e21a4d6a43c46fbe8318c5c0ae2b92a955b12fab6594d548216e58a2129"} Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.599096 4718 generic.go:334] "Generic (PLEG): container finished" podID="65778ba5-a5bf-495f-97ba-045a6ef28ce8" containerID="3858ea3a4ddc2da9264644a70e9fe5c89b6c4dec6a43e1dfa65be42268a53a87" exitCode=0 Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.599166 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-316a-account-create-update-xsvgc" event={"ID":"65778ba5-a5bf-495f-97ba-045a6ef28ce8","Type":"ContainerDied","Data":"3858ea3a4ddc2da9264644a70e9fe5c89b6c4dec6a43e1dfa65be42268a53a87"} Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 
14:56:10.604699 4718 generic.go:334] "Generic (PLEG): container finished" podID="6c0ffc92-31c4-4d50-be24-ce7b6fed6506" containerID="f64d88482fac23cf5723258ca8d01796d926859245dc69f1d95365586300f637" exitCode=0 Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.605109 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kbxrn" event={"ID":"6c0ffc92-31c4-4d50-be24-ce7b6fed6506","Type":"ContainerDied","Data":"f64d88482fac23cf5723258ca8d01796d926859245dc69f1d95365586300f637"} Dec 10 14:56:10 crc kubenswrapper[4718]: I1210 14:56:10.810680 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-66e1-account-create-update-tt7r2" podStartSLOduration=2.810647233 podStartE2EDuration="2.810647233s" podCreationTimestamp="2025-12-10 14:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:10.801794277 +0000 UTC m=+1475.751017694" watchObservedRunningTime="2025-12-10 14:56:10.810647233 +0000 UTC m=+1475.759870650" Dec 10 14:56:11 crc kubenswrapper[4718]: I1210 14:56:11.019865 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ftrjh" Dec 10 14:56:11 crc kubenswrapper[4718]: I1210 14:56:11.614305 4718 generic.go:334] "Generic (PLEG): container finished" podID="93986540-1957-4930-a489-dd0e648099a7" containerID="022e20bd087211a2be64a6e1310f61cd5da2269ffc534257c6e8b0151e372ef8" exitCode=0 Dec 10 14:56:11 crc kubenswrapper[4718]: I1210 14:56:11.615047 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66e1-account-create-update-tt7r2" event={"ID":"93986540-1957-4930-a489-dd0e648099a7","Type":"ContainerDied","Data":"022e20bd087211a2be64a6e1310f61cd5da2269ffc534257c6e8b0151e372ef8"} Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.624930 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.635324 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.639305 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-316a-account-create-update-xsvgc" event={"ID":"65778ba5-a5bf-495f-97ba-045a6ef28ce8","Type":"ContainerDied","Data":"65e77b0857d869de03d40aeb03cb361f8641a8a1cbbb2766c0793f46c830c8b2"} Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.639365 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e77b0857d869de03d40aeb03cb361f8641a8a1cbbb2766c0793f46c830c8b2" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.643211 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kbxrn" event={"ID":"6c0ffc92-31c4-4d50-be24-ce7b6fed6506","Type":"ContainerDied","Data":"9cf5025f0be81cd11dffd0813e3b73e68190a411e4b7788af3a4fe556e2c55f3"} Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.643267 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf5025f0be81cd11dffd0813e3b73e68190a411e4b7788af3a4fe556e2c55f3" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.647248 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ftrjh-config-57trk" event={"ID":"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1","Type":"ContainerDied","Data":"9767fa4be9db5239bc2e275864f64a42c331bb666051c212b5410c4b4cb772bf"} Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.647295 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9767fa4be9db5239bc2e275864f64a42c331bb666051c212b5410c4b4cb772bf" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.647368 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.650018 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dgmf7" event={"ID":"55ba3ecd-dc8b-455c-a7a5-737d89a02227","Type":"ContainerDied","Data":"54e42d5ca72873d5e068f44be153c23f25998e17308c81a150a87f52e4160a13"} Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.650093 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e42d5ca72873d5e068f44be153c23f25998e17308c81a150a87f52e4160a13" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.650058 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dgmf7" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.651920 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-66e1-account-create-update-tt7r2" event={"ID":"93986540-1957-4930-a489-dd0e648099a7","Type":"ContainerDied","Data":"3c687e21a4d6a43c46fbe8318c5c0ae2b92a955b12fab6594d548216e58a2129"} Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.651991 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c687e21a4d6a43c46fbe8318c5c0ae2b92a955b12fab6594d548216e58a2129" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.651956 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-66e1-account-create-update-tt7r2" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.658197 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.661731 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.737678 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-log-ovn\") pod \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.737748 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93986540-1957-4930-a489-dd0e648099a7-operator-scripts\") pod \"93986540-1957-4930-a489-dd0e648099a7\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.737803 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-additional-scripts\") pod \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.737843 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whjhg\" (UniqueName: \"kubernetes.io/projected/65778ba5-a5bf-495f-97ba-045a6ef28ce8-kube-api-access-whjhg\") pod \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.737889 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ba3ecd-dc8b-455c-a7a5-737d89a02227-operator-scripts\") pod \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.737973 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fp8bf\" (UniqueName: \"kubernetes.io/projected/93986540-1957-4930-a489-dd0e648099a7-kube-api-access-fp8bf\") pod \"93986540-1957-4930-a489-dd0e648099a7\" (UID: \"93986540-1957-4930-a489-dd0e648099a7\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738010 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65778ba5-a5bf-495f-97ba-045a6ef28ce8-operator-scripts\") pod \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\" (UID: \"65778ba5-a5bf-495f-97ba-045a6ef28ce8\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738076 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-operator-scripts\") pod \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738148 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-scripts\") pod \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738168 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run-ovn\") pod \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738205 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run\") pod \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\" (UID: 
\"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738238 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tvrj\" (UniqueName: \"kubernetes.io/projected/55ba3ecd-dc8b-455c-a7a5-737d89a02227-kube-api-access-5tvrj\") pod \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\" (UID: \"55ba3ecd-dc8b-455c-a7a5-737d89a02227\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738309 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjtxc\" (UniqueName: \"kubernetes.io/projected/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-kube-api-access-cjtxc\") pod \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\" (UID: \"6c0ffc92-31c4-4d50-be24-ce7b6fed6506\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.738334 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plhdt\" (UniqueName: \"kubernetes.io/projected/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-kube-api-access-plhdt\") pod \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\" (UID: \"2f53aabb-ce76-4ee7-973b-ebea7a6b06b1\") " Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.739194 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" (UID: "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.739573 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run" (OuterVolumeSpecName: "var-run") pod "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" (UID: "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.739696 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ba3ecd-dc8b-455c-a7a5-737d89a02227-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55ba3ecd-dc8b-455c-a7a5-737d89a02227" (UID: "55ba3ecd-dc8b-455c-a7a5-737d89a02227"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.739962 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" (UID: "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.740222 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c0ffc92-31c4-4d50-be24-ce7b6fed6506" (UID: "6c0ffc92-31c4-4d50-be24-ce7b6fed6506"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.740821 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93986540-1957-4930-a489-dd0e648099a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93986540-1957-4930-a489-dd0e648099a7" (UID: "93986540-1957-4930-a489-dd0e648099a7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.740807 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65778ba5-a5bf-495f-97ba-045a6ef28ce8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "65778ba5-a5bf-495f-97ba-045a6ef28ce8" (UID: "65778ba5-a5bf-495f-97ba-045a6ef28ce8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.740868 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" (UID: "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.741165 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-scripts" (OuterVolumeSpecName: "scripts") pod "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" (UID: "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.749779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65778ba5-a5bf-495f-97ba-045a6ef28ce8-kube-api-access-whjhg" (OuterVolumeSpecName: "kube-api-access-whjhg") pod "65778ba5-a5bf-495f-97ba-045a6ef28ce8" (UID: "65778ba5-a5bf-495f-97ba-045a6ef28ce8"). InnerVolumeSpecName "kube-api-access-whjhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.749868 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93986540-1957-4930-a489-dd0e648099a7-kube-api-access-fp8bf" (OuterVolumeSpecName: "kube-api-access-fp8bf") pod "93986540-1957-4930-a489-dd0e648099a7" (UID: "93986540-1957-4930-a489-dd0e648099a7"). InnerVolumeSpecName "kube-api-access-fp8bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.755151 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-kube-api-access-plhdt" (OuterVolumeSpecName: "kube-api-access-plhdt") pod "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" (UID: "2f53aabb-ce76-4ee7-973b-ebea7a6b06b1"). InnerVolumeSpecName "kube-api-access-plhdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.755288 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-kube-api-access-cjtxc" (OuterVolumeSpecName: "kube-api-access-cjtxc") pod "6c0ffc92-31c4-4d50-be24-ce7b6fed6506" (UID: "6c0ffc92-31c4-4d50-be24-ce7b6fed6506"). InnerVolumeSpecName "kube-api-access-cjtxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.770841 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ba3ecd-dc8b-455c-a7a5-737d89a02227-kube-api-access-5tvrj" (OuterVolumeSpecName: "kube-api-access-5tvrj") pod "55ba3ecd-dc8b-455c-a7a5-737d89a02227" (UID: "55ba3ecd-dc8b-455c-a7a5-737d89a02227"). InnerVolumeSpecName "kube-api-access-5tvrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841833 4718 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841881 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93986540-1957-4930-a489-dd0e648099a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841898 4718 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841908 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whjhg\" (UniqueName: \"kubernetes.io/projected/65778ba5-a5bf-495f-97ba-045a6ef28ce8-kube-api-access-whjhg\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841923 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ba3ecd-dc8b-455c-a7a5-737d89a02227-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841932 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp8bf\" (UniqueName: \"kubernetes.io/projected/93986540-1957-4930-a489-dd0e648099a7-kube-api-access-fp8bf\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841940 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/65778ba5-a5bf-495f-97ba-045a6ef28ce8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc 
kubenswrapper[4718]: I1210 14:56:13.841950 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841984 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.841995 4718 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.842006 4718 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-var-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.842015 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tvrj\" (UniqueName: \"kubernetes.io/projected/55ba3ecd-dc8b-455c-a7a5-737d89a02227-kube-api-access-5tvrj\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.842027 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjtxc\" (UniqueName: \"kubernetes.io/projected/6c0ffc92-31c4-4d50-be24-ce7b6fed6506-kube-api-access-cjtxc\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:13 crc kubenswrapper[4718]: I1210 14:56:13.842036 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plhdt\" (UniqueName: \"kubernetes.io/projected/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1-kube-api-access-plhdt\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.665182 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-316a-account-create-update-xsvgc" Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.665172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerStarted","Data":"8619081bdf22c0511257418f9e3c2197eaa3fdcdecc7ef404b955f8c3df57477"} Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.665274 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ftrjh-config-57trk" Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.665273 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kbxrn" Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.702419 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=7.391873996 podStartE2EDuration="1m51.702358025s" podCreationTimestamp="2025-12-10 14:54:23 +0000 UTC" firstStartedPulling="2025-12-10 14:54:29.492891365 +0000 UTC m=+1374.442114782" lastFinishedPulling="2025-12-10 14:56:13.803375394 +0000 UTC m=+1478.752598811" observedRunningTime="2025-12-10 14:56:14.695322515 +0000 UTC m=+1479.644545942" watchObservedRunningTime="2025-12-10 14:56:14.702358025 +0000 UTC m=+1479.651581442" Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.921485 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ftrjh-config-57trk"] Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.932177 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ftrjh-config-57trk"] Dec 10 14:56:14 crc kubenswrapper[4718]: I1210 14:56:14.976032 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 10 14:56:15 crc kubenswrapper[4718]: I1210 14:56:15.420076 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:16 crc kubenswrapper[4718]: I1210 14:56:16.037655 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" path="/var/lib/kubelet/pods/2f53aabb-ce76-4ee7-973b-ebea7a6b06b1/volumes" Dec 10 14:56:16 crc kubenswrapper[4718]: I1210 14:56:16.691852 4718 generic.go:334] "Generic (PLEG): container finished" podID="ea7defa5-2130-4d6d-8bba-9416bec21dfa" containerID="c9c0e0a614eb99a46e7fee4f275ecfed2eadb733257d7dc05cc902927c6066e6" exitCode=0 Dec 10 14:56:16 crc kubenswrapper[4718]: I1210 14:56:16.691971 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nhp59" event={"ID":"ea7defa5-2130-4d6d-8bba-9416bec21dfa","Type":"ContainerDied","Data":"c9c0e0a614eb99a46e7fee4f275ecfed2eadb733257d7dc05cc902927c6066e6"} Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.082351 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.084111 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.084281 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.186613 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwsz\" (UniqueName: \"kubernetes.io/projected/ea7defa5-2130-4d6d-8bba-9416bec21dfa-kube-api-access-7bwsz\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.186997 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-dispersionconf\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.187803 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-scripts\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.188017 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-combined-ca-bundle\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.188228 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea7defa5-2130-4d6d-8bba-9416bec21dfa-etc-swift\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.188402 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-ring-data-devices\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.188565 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-swiftconf\") pod \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\" (UID: \"ea7defa5-2130-4d6d-8bba-9416bec21dfa\") " Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.190138 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.191501 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7defa5-2130-4d6d-8bba-9416bec21dfa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.222936 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.226867 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7defa5-2130-4d6d-8bba-9416bec21dfa-kube-api-access-7bwsz" (OuterVolumeSpecName: "kube-api-access-7bwsz") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "kube-api-access-7bwsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.232556 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.250719 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-scripts" (OuterVolumeSpecName: "scripts") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.256862 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea7defa5-2130-4d6d-8bba-9416bec21dfa" (UID: "ea7defa5-2130-4d6d-8bba-9416bec21dfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.298459 4718 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.298516 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bwsz\" (UniqueName: \"kubernetes.io/projected/ea7defa5-2130-4d6d-8bba-9416bec21dfa-kube-api-access-7bwsz\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.298536 4718 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.298554 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc 
kubenswrapper[4718]: I1210 14:56:18.298567 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7defa5-2130-4d6d-8bba-9416bec21dfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.298581 4718 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/ea7defa5-2130-4d6d-8bba-9416bec21dfa-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.298593 4718 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/ea7defa5-2130-4d6d-8bba-9416bec21dfa-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.717870 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nhp59" event={"ID":"ea7defa5-2130-4d6d-8bba-9416bec21dfa","Type":"ContainerDied","Data":"ef27066b5d923d770874d9592e3cf142494ccbb0d8d13650efd64bf07290ba33"} Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.717971 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef27066b5d923d770874d9592e3cf142494ccbb0d8d13650efd64bf07290ba33" Dec 10 14:56:18 crc kubenswrapper[4718]: I1210 14:56:18.718030 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nhp59" Dec 10 14:56:23 crc kubenswrapper[4718]: I1210 14:56:23.109407 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:56:23 crc kubenswrapper[4718]: I1210 14:56:23.118901 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e094c947-215b-4386-906f-5ee833afa9d0-etc-swift\") pod \"swift-storage-0\" (UID: \"e094c947-215b-4386-906f-5ee833afa9d0\") " pod="openstack/swift-storage-0" Dec 10 14:56:23 crc kubenswrapper[4718]: I1210 14:56:23.229624 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 10 14:56:23 crc kubenswrapper[4718]: I1210 14:56:23.749312 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 10 14:56:23 crc kubenswrapper[4718]: I1210 14:56:23.833869 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Dec 10 14:56:24 crc kubenswrapper[4718]: I1210 14:56:24.311240 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 10 14:56:24 crc kubenswrapper[4718]: I1210 14:56:24.796657 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"7ddbbe081d32abc2d86a0d59a74b0642abea1bd7eaf00dc335d727edffa118ce"} Dec 10 14:56:25 crc kubenswrapper[4718]: I1210 14:56:25.285042 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="5611ee41-14a4-45d3-88b1-e6e6c9bc4d13" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Dec 10 14:56:25 crc kubenswrapper[4718]: I1210 14:56:25.419764 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:25 crc kubenswrapper[4718]: I1210 14:56:25.424331 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:25 crc kubenswrapper[4718]: I1210 14:56:25.811212 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.735815 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.737862 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="prometheus" containerID="cri-o://1bf31a0918086e5ea79a4eb9e5ad6ffbe73f1fa6e3677827467ec663c596cc98" gracePeriod=600 Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.738062 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="thanos-sidecar" containerID="cri-o://8619081bdf22c0511257418f9e3c2197eaa3fdcdecc7ef404b955f8c3df57477" gracePeriod=600 Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.738126 4718 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="config-reloader" containerID="cri-o://7c47ffed17c564b3746d6b3253d391d6fbc1beb686f24aec2763ec3af824c602" gracePeriod=600 Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.766987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"2a06bd0bb14d3eaef1c3b6a556fea1d43c5d256528631cc4a24e600fb4187b7c"} Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.767738 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"b5382d967d58634adaff83bd82ac10a2353f94781f5c4b8a5099338018bb97bc"} Dec 10 14:56:29 crc kubenswrapper[4718]: I1210 14:56:29.767823 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"4aa045f41f7751c1e6e525ec280388d20305941bfe6d4edf94a1d573e5bdd452"} Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.420906 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": dial tcp 10.217.0.111:9090: connect: connection refused" Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.831239 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerID="8619081bdf22c0511257418f9e3c2197eaa3fdcdecc7ef404b955f8c3df57477" exitCode=0 Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.831889 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" 
containerID="7c47ffed17c564b3746d6b3253d391d6fbc1beb686f24aec2763ec3af824c602" exitCode=0 Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.831903 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerID="1bf31a0918086e5ea79a4eb9e5ad6ffbe73f1fa6e3677827467ec663c596cc98" exitCode=0 Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.831306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerDied","Data":"8619081bdf22c0511257418f9e3c2197eaa3fdcdecc7ef404b955f8c3df57477"} Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.832099 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerDied","Data":"7c47ffed17c564b3746d6b3253d391d6fbc1beb686f24aec2763ec3af824c602"} Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.832119 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerDied","Data":"1bf31a0918086e5ea79a4eb9e5ad6ffbe73f1fa6e3677827467ec663c596cc98"} Dec 10 14:56:30 crc kubenswrapper[4718]: I1210 14:56:30.839306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"43b9a3287828170b915ec672b722503d532063dda6c84a1df5f1049f594802b4"} Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.290209 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.484790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.484927 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config-out\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.485501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.485622 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-tls-assets\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.486267 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-thanos-prometheus-http-client-file\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.486466 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-web-config\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.486521 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ea78b30-1cce-42f6-abae-e7e66ee3daae-prometheus-metric-storage-rulefiles-0\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.486555 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xf2q\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-kube-api-access-5xf2q\") pod \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\" (UID: \"7ea78b30-1cce-42f6-abae-e7e66ee3daae\") " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.487461 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea78b30-1cce-42f6-abae-e7e66ee3daae-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.487638 4718 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7ea78b30-1cce-42f6-abae-e7e66ee3daae-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.494886 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config" (OuterVolumeSpecName: "config") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.498710 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.498840 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config-out" (OuterVolumeSpecName: "config-out") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.507050 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-kube-api-access-5xf2q" (OuterVolumeSpecName: "kube-api-access-5xf2q") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "kube-api-access-5xf2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.517299 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-web-config" (OuterVolumeSpecName: "web-config") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.521150 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.526118 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "7ea78b30-1cce-42f6-abae-e7e66ee3daae" (UID: "7ea78b30-1cce-42f6-abae-e7e66ee3daae"). InnerVolumeSpecName "pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.589965 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") on node \"crc\" " Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.590020 4718 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.590037 4718 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.590055 4718 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-web-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.590074 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xf2q\" (UniqueName: \"kubernetes.io/projected/7ea78b30-1cce-42f6-abae-e7e66ee3daae-kube-api-access-5xf2q\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.590088 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.590097 4718 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7ea78b30-1cce-42f6-abae-e7e66ee3daae-config-out\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 
crc kubenswrapper[4718]: I1210 14:56:32.619339 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.619968 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46") on node "crc" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.691739 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.866335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7ea78b30-1cce-42f6-abae-e7e66ee3daae","Type":"ContainerDied","Data":"03ac05fb6f73e02c896909f290014b4ea13c46ae8f9e5843da11be794def74d4"} Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.866452 4718 scope.go:117] "RemoveContainer" containerID="8619081bdf22c0511257418f9e3c2197eaa3fdcdecc7ef404b955f8c3df57477" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.866660 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.906857 4718 scope.go:117] "RemoveContainer" containerID="7c47ffed17c564b3746d6b3253d391d6fbc1beb686f24aec2763ec3af824c602" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.913562 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.919409 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.957348 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958137 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="init-config-reloader" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958174 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="init-config-reloader" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958193 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65778ba5-a5bf-495f-97ba-045a6ef28ce8" containerName="mariadb-account-create-update" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958202 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="65778ba5-a5bf-495f-97ba-045a6ef28ce8" containerName="mariadb-account-create-update" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958233 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7defa5-2130-4d6d-8bba-9416bec21dfa" containerName="swift-ring-rebalance" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958243 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7defa5-2130-4d6d-8bba-9416bec21dfa" containerName="swift-ring-rebalance" Dec 10 14:56:32 crc 
kubenswrapper[4718]: E1210 14:56:32.958258 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0ffc92-31c4-4d50-be24-ce7b6fed6506" containerName="mariadb-database-create" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958266 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0ffc92-31c4-4d50-be24-ce7b6fed6506" containerName="mariadb-database-create" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958286 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="config-reloader" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958294 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="config-reloader" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958305 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="thanos-sidecar" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958313 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="thanos-sidecar" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958330 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="prometheus" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958340 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="prometheus" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958357 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ba3ecd-dc8b-455c-a7a5-737d89a02227" containerName="mariadb-database-create" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958366 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ba3ecd-dc8b-455c-a7a5-737d89a02227" containerName="mariadb-database-create" Dec 10 
14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958380 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" containerName="ovn-config" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958408 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" containerName="ovn-config" Dec 10 14:56:32 crc kubenswrapper[4718]: E1210 14:56:32.958426 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93986540-1957-4930-a489-dd0e648099a7" containerName="mariadb-account-create-update" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958434 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="93986540-1957-4930-a489-dd0e648099a7" containerName="mariadb-account-create-update" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958671 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="prometheus" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958695 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7defa5-2130-4d6d-8bba-9416bec21dfa" containerName="swift-ring-rebalance" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958707 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0ffc92-31c4-4d50-be24-ce7b6fed6506" containerName="mariadb-database-create" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958716 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f53aabb-ce76-4ee7-973b-ebea7a6b06b1" containerName="ovn-config" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958732 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="93986540-1957-4930-a489-dd0e648099a7" containerName="mariadb-account-create-update" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958747 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="config-reloader" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958758 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" containerName="thanos-sidecar" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958776 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="65778ba5-a5bf-495f-97ba-045a6ef28ce8" containerName="mariadb-account-create-update" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.958788 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ba3ecd-dc8b-455c-a7a5-737d89a02227" containerName="mariadb-database-create" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.961256 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.970570 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8v2sk" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.971121 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.971292 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.971152 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.979261 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.982170 4718 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.982498 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 14:56:32 crc kubenswrapper[4718]: I1210 14:56:32.985542 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.007021 4718 scope.go:117] "RemoveContainer" containerID="1bf31a0918086e5ea79a4eb9e5ad6ffbe73f1fa6e3677827467ec663c596cc98" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.051514 4718 scope.go:117] "RemoveContainer" containerID="cb941a979e3d51662c1c888d4e80eee2c6c2c4f0cc496d6051c6c77d152f3af2" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.098292 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.098355 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.098407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.098599 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.098746 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm59j\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-kube-api-access-xm59j\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.098919 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.099062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.099219 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.099275 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.099515 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.099668 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.202129 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 
crc kubenswrapper[4718]: I1210 14:56:33.202270 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203141 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203307 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203557 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " 
pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203589 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203637 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203721 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm59j\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-kube-api-access-xm59j\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.203875 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.206467 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.210081 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.211790 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.211807 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.211825 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/536f9c0805135df6ee87eba8f71795b119991da876d5796e8953829643544095/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.212301 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.212461 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 
14:56:33.212468 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.213226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.213951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.214603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.243339 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm59j\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-kube-api-access-xm59j\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.264694 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.315748 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.743067 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.828598 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.864425 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.884496 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"94a5af03c92b21a31c4e5293457ab15d3c8ce05a62a5a278772fd2a72eb5e5b8"} Dec 10 14:56:33 crc kubenswrapper[4718]: I1210 14:56:33.897537 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerStarted","Data":"85ce5f60b0cdd99a5a754c2bed864db403c9218d2ecb82c9cd834bcea355b53d"} Dec 10 14:56:34 crc kubenswrapper[4718]: I1210 14:56:34.112015 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7ea78b30-1cce-42f6-abae-e7e66ee3daae" path="/var/lib/kubelet/pods/7ea78b30-1cce-42f6-abae-e7e66ee3daae/volumes" Dec 10 14:56:34 crc kubenswrapper[4718]: I1210 14:56:34.919919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"082472f55c7b17e937c5ecf14c84319704c8eaa399a9edd2d676d70a9d2ac0e9"} Dec 10 14:56:34 crc kubenswrapper[4718]: I1210 14:56:34.920009 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"b16f72c40bf239e1e25614fea7fa97bd515adedd08054eed2a2d887aa491a770"} Dec 10 14:56:35 crc kubenswrapper[4718]: I1210 14:56:35.280913 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="5611ee41-14a4-45d3-88b1-e6e6c9bc4d13" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Dec 10 14:56:35 crc kubenswrapper[4718]: I1210 14:56:35.970895 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"06b059e21aba09c4aede9e8b4610a3370a866c4feb007b24a44fcfb5c68cae3d"} Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.058573 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-btnnx"] Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.060847 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.098554 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btnnx"] Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.144583 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-utilities\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.144876 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr65l\" (UniqueName: \"kubernetes.io/projected/635b489b-951a-4df2-8cb2-4c1fff2d8d59-kube-api-access-dr65l\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.145347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-catalog-content\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.248152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-catalog-content\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.248313 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-utilities\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.248374 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr65l\" (UniqueName: \"kubernetes.io/projected/635b489b-951a-4df2-8cb2-4c1fff2d8d59-kube-api-access-dr65l\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.248835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-catalog-content\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.248885 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-utilities\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.458109 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr65l\" (UniqueName: \"kubernetes.io/projected/635b489b-951a-4df2-8cb2-4c1fff2d8d59-kube-api-access-dr65l\") pod \"redhat-operators-btnnx\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:36 crc kubenswrapper[4718]: I1210 14:56:36.688090 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:37 crc kubenswrapper[4718]: I1210 14:56:37.086066 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btnnx"] Dec 10 14:56:37 crc kubenswrapper[4718]: W1210 14:56:37.178686 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod635b489b_951a_4df2_8cb2_4c1fff2d8d59.slice/crio-fb68f262a6056fc6394d74bffd14fbc9f09620a007aa44a71adc3e253155b53b WatchSource:0}: Error finding container fb68f262a6056fc6394d74bffd14fbc9f09620a007aa44a71adc3e253155b53b: Status 404 returned error can't find the container with id fb68f262a6056fc6394d74bffd14fbc9f09620a007aa44a71adc3e253155b53b Dec 10 14:56:38 crc kubenswrapper[4718]: I1210 14:56:38.002088 4718 generic.go:334] "Generic (PLEG): container finished" podID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerID="3c14f40fb175da7e2372cde8f8f0cd7f54b89e5f8f4a9daf4e37538e5b6417a8" exitCode=0 Dec 10 14:56:38 crc kubenswrapper[4718]: I1210 14:56:38.002171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerDied","Data":"3c14f40fb175da7e2372cde8f8f0cd7f54b89e5f8f4a9daf4e37538e5b6417a8"} Dec 10 14:56:38 crc kubenswrapper[4718]: I1210 14:56:38.002642 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerStarted","Data":"fb68f262a6056fc6394d74bffd14fbc9f09620a007aa44a71adc3e253155b53b"} Dec 10 14:56:38 crc kubenswrapper[4718]: I1210 14:56:38.006504 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerStarted","Data":"c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33"} Dec 
10 14:56:39 crc kubenswrapper[4718]: I1210 14:56:39.049561 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"6969c19904aefd015ff0d7a8da2667d6d9d2f3a2e5e1187c6eae0e1658b24eb0"} Dec 10 14:56:40 crc kubenswrapper[4718]: I1210 14:56:40.096306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"b058d8758db2415c61d34cb5a53095dc338863956607a69c801c870de80e7cc5"} Dec 10 14:56:40 crc kubenswrapper[4718]: I1210 14:56:40.096951 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"18e5e126c8c8e23db1b557f9e4ff9fe40567adcb5b7cf74b2c66d16bc08fe8c5"} Dec 10 14:56:40 crc kubenswrapper[4718]: I1210 14:56:40.096970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"e511e292c6a98cd3701981c3d75093e924b006c82224b4bed0b35455ff7a34f3"} Dec 10 14:56:40 crc kubenswrapper[4718]: I1210 14:56:40.099308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerStarted","Data":"ac39c130c0d7547daf2a4b4b1c0defe5839dc7009455adf2d179c57f99cbcedc"} Dec 10 14:56:43 crc kubenswrapper[4718]: I1210 14:56:43.135807 4718 generic.go:334] "Generic (PLEG): container finished" podID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerID="ac39c130c0d7547daf2a4b4b1c0defe5839dc7009455adf2d179c57f99cbcedc" exitCode=0 Dec 10 14:56:43 crc kubenswrapper[4718]: I1210 14:56:43.135882 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" 
event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerDied","Data":"ac39c130c0d7547daf2a4b4b1c0defe5839dc7009455adf2d179c57f99cbcedc"} Dec 10 14:56:43 crc kubenswrapper[4718]: I1210 14:56:43.149471 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"7bcf70e00a4a4f84fc36d7aba45a044a26fbac07512afbbe23227f18555d8df3"} Dec 10 14:56:43 crc kubenswrapper[4718]: I1210 14:56:43.149546 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"4c7fd0850c9a47beb2a9c2605ced7273114465321e9153ae6034a35c17709334"} Dec 10 14:56:43 crc kubenswrapper[4718]: I1210 14:56:43.744705 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 14:56:43 crc kubenswrapper[4718]: I1210 14:56:43.830722 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.237309 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e094c947-215b-4386-906f-5ee833afa9d0","Type":"ContainerStarted","Data":"0b25d8710b12270650255c62225122746dbd4fd9a542a0892c80d8e1f58c33d1"} Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.534040 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.750039473 podStartE2EDuration="54.533985553s" podCreationTimestamp="2025-12-10 14:55:50 +0000 UTC" firstStartedPulling="2025-12-10 14:56:24.327230926 +0000 UTC m=+1489.276454343" lastFinishedPulling="2025-12-10 14:56:38.111177006 +0000 UTC m=+1503.060400423" observedRunningTime="2025-12-10 14:56:44.498374324 +0000 UTC m=+1509.447597751" watchObservedRunningTime="2025-12-10 14:56:44.533985553 +0000 UTC 
m=+1509.483208960" Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.869571 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k6j4m"] Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.871151 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.894253 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k6j4m"] Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.969846 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fe2e-account-create-update-q9jg8"] Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.971633 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.978907 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 10 14:56:44 crc kubenswrapper[4718]: I1210 14:56:44.995244 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fe2e-account-create-update-q9jg8"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.030712 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-operator-scripts\") pod \"cinder-db-create-k6j4m\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.030853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-kube-api-access-gfnpw\") pod \"cinder-db-create-k6j4m\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " 
pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.083002 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db97-account-create-update-5zs2l"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.085375 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.092575 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5fkv5"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.094172 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.096916 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.114481 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db97-account-create-update-5zs2l"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.133212 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-kube-api-access-gfnpw\") pod \"cinder-db-create-k6j4m\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.133352 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec673fdd-e3b3-4576-aefa-c53822bd5f27-operator-scripts\") pod \"cinder-fe2e-account-create-update-q9jg8\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.133435 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbxd\" (UniqueName: \"kubernetes.io/projected/ec673fdd-e3b3-4576-aefa-c53822bd5f27-kube-api-access-mhbxd\") pod \"cinder-fe2e-account-create-update-q9jg8\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.133498 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-operator-scripts\") pod \"cinder-db-create-k6j4m\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.134554 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-operator-scripts\") pod \"cinder-db-create-k6j4m\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.134808 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5fkv5"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.204218 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-kube-api-access-gfnpw\") pod \"cinder-db-create-k6j4m\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.235332 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73737af3-b524-4da3-a97a-d3815b32ae7b-operator-scripts\") pod 
\"barbican-db97-account-create-update-5zs2l\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.235464 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec673fdd-e3b3-4576-aefa-c53822bd5f27-operator-scripts\") pod \"cinder-fe2e-account-create-update-q9jg8\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.235521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8vd\" (UniqueName: \"kubernetes.io/projected/1e87b030-a8e8-444f-9dda-a2d7a563aba9-kube-api-access-tm8vd\") pod \"barbican-db-create-5fkv5\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.235562 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbxd\" (UniqueName: \"kubernetes.io/projected/ec673fdd-e3b3-4576-aefa-c53822bd5f27-kube-api-access-mhbxd\") pod \"cinder-fe2e-account-create-update-q9jg8\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.235623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e87b030-a8e8-444f-9dda-a2d7a563aba9-operator-scripts\") pod \"barbican-db-create-5fkv5\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.235711 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sc2kt\" (UniqueName: \"kubernetes.io/projected/73737af3-b524-4da3-a97a-d3815b32ae7b-kube-api-access-sc2kt\") pod \"barbican-db97-account-create-update-5zs2l\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.236666 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec673fdd-e3b3-4576-aefa-c53822bd5f27-operator-scripts\") pod \"cinder-fe2e-account-create-update-q9jg8\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.267490 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.275551 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbxd\" (UniqueName: \"kubernetes.io/projected/ec673fdd-e3b3-4576-aefa-c53822bd5f27-kube-api-access-mhbxd\") pod \"cinder-fe2e-account-create-update-q9jg8\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.287586 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.298873 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerStarted","Data":"c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01"} Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.305636 4718 generic.go:334] "Generic (PLEG): container finished" podID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" 
containerID="c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33" exitCode=0 Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.307288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerDied","Data":"c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33"} Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.310631 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.337338 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm8vd\" (UniqueName: \"kubernetes.io/projected/1e87b030-a8e8-444f-9dda-a2d7a563aba9-kube-api-access-tm8vd\") pod \"barbican-db-create-5fkv5\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.337453 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e87b030-a8e8-444f-9dda-a2d7a563aba9-operator-scripts\") pod \"barbican-db-create-5fkv5\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.337507 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc2kt\" (UniqueName: \"kubernetes.io/projected/73737af3-b524-4da3-a97a-d3815b32ae7b-kube-api-access-sc2kt\") pod \"barbican-db97-account-create-update-5zs2l\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.337590 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/73737af3-b524-4da3-a97a-d3815b32ae7b-operator-scripts\") pod \"barbican-db97-account-create-update-5zs2l\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.338451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73737af3-b524-4da3-a97a-d3815b32ae7b-operator-scripts\") pod \"barbican-db97-account-create-update-5zs2l\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.338577 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e87b030-a8e8-444f-9dda-a2d7a563aba9-operator-scripts\") pod \"barbican-db-create-5fkv5\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.347064 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-dwmpv"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.349105 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.354957 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.364089 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm8vd\" (UniqueName: \"kubernetes.io/projected/1e87b030-a8e8-444f-9dda-a2d7a563aba9-kube-api-access-tm8vd\") pod \"barbican-db-create-5fkv5\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.369928 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc2kt\" (UniqueName: \"kubernetes.io/projected/73737af3-b524-4da3-a97a-d3815b32ae7b-kube-api-access-sc2kt\") pod \"barbican-db97-account-create-update-5zs2l\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.391183 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-dwmpv"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.421415 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.438602 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.439828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grpf\" (UniqueName: \"kubernetes.io/projected/b15123b2-81bb-4fab-b6cd-ed84c0965118-kube-api-access-9grpf\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.439967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-config\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.440101 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-nb\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.440134 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-swift-storage-0\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.440160 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-sb\") pod 
\"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.440284 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-svc\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.528702 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nhtpt"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.530708 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.537444 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.537484 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.537532 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.537950 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fvdkw" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.542112 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-config\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.542245 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-nb\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.542288 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-swift-storage-0\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.542327 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-sb\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.542503 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-svc\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.542611 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grpf\" (UniqueName: \"kubernetes.io/projected/b15123b2-81bb-4fab-b6cd-ed84c0965118-kube-api-access-9grpf\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.546767 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-config\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.547630 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-nb\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.548251 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-swift-storage-0\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.549130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-sb\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.550966 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-svc\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.564523 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nhtpt"] Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.565823 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btnnx" podStartSLOduration=3.818960245 podStartE2EDuration="9.565791724s" podCreationTimestamp="2025-12-10 14:56:36 +0000 UTC" firstStartedPulling="2025-12-10 14:56:38.106950159 +0000 UTC m=+1503.056173576" lastFinishedPulling="2025-12-10 14:56:43.853781638 +0000 UTC m=+1508.803005055" observedRunningTime="2025-12-10 14:56:45.557711118 +0000 UTC m=+1510.506934535" watchObservedRunningTime="2025-12-10 14:56:45.565791724 +0000 UTC m=+1510.515015151" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.582749 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grpf\" (UniqueName: \"kubernetes.io/projected/b15123b2-81bb-4fab-b6cd-ed84c0965118-kube-api-access-9grpf\") pod \"dnsmasq-dns-55b99bf79c-dwmpv\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.645131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-combined-ca-bundle\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.645924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-config-data\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:45 crc kubenswrapper[4718]: I1210 14:56:45.646192 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5gsd\" (UniqueName: 
\"kubernetes.io/projected/801e3b76-dd13-4285-9597-8f7874496ed5-kube-api-access-q5gsd\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.748935 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-combined-ca-bundle\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.749010 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-config-data\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.749104 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5gsd\" (UniqueName: \"kubernetes.io/projected/801e3b76-dd13-4285-9597-8f7874496ed5-kube-api-access-q5gsd\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.761103 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-config-data\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.762492 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-combined-ca-bundle\") pod \"keystone-db-sync-nhtpt\" (UID: 
\"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.782331 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5gsd\" (UniqueName: \"kubernetes.io/projected/801e3b76-dd13-4285-9597-8f7874496ed5-kube-api-access-q5gsd\") pod \"keystone-db-sync-nhtpt\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.796610 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:45.886917 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:46.088112 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k6j4m"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:46.345069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k6j4m" event={"ID":"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3","Type":"ContainerStarted","Data":"320079ad5df1cbdb308a57d042f4094309270472ed6f81e78fb5f6722367693f"} Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:46.689837 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:46.690235 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:47.358033 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k6j4m" event={"ID":"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3","Type":"ContainerStarted","Data":"db973de307b14a5162d5187cdbfacfe4c57fd955103c73d63283b37fdf9a5823"} Dec 10 14:56:48 crc 
kubenswrapper[4718]: I1210 14:56:47.378486 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerStarted","Data":"b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0"} Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:47.404584 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-k6j4m" podStartSLOduration=3.404549796 podStartE2EDuration="3.404549796s" podCreationTimestamp="2025-12-10 14:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:47.388325801 +0000 UTC m=+1512.337549238" watchObservedRunningTime="2025-12-10 14:56:47.404549796 +0000 UTC m=+1512.353773213" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:47.777095 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btnnx" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="registry-server" probeResult="failure" output=< Dec 10 14:56:48 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 14:56:48 crc kubenswrapper[4718]: > Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.084511 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.084690 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.577791 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pxzzc"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.587648 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.609883 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pxzzc"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.638453 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-bcnrt"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.644559 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.656962 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.663644 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-xlv7b" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.674727 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-bcnrt"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.735993 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsj2\" (UniqueName: \"kubernetes.io/projected/d1528b35-a982-4f53-90fb-0f0374ffcdb3-kube-api-access-phsj2\") pod \"glance-db-create-pxzzc\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.736108 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1528b35-a982-4f53-90fb-0f0374ffcdb3-operator-scripts\") pod \"glance-db-create-pxzzc\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.795805 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mxtgr"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.817130 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.830547 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mxtgr"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.844954 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-db-sync-config-data\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.845103 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-config-data\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.845251 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phsj2\" (UniqueName: \"kubernetes.io/projected/d1528b35-a982-4f53-90fb-0f0374ffcdb3-kube-api-access-phsj2\") pod \"glance-db-create-pxzzc\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.845331 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1528b35-a982-4f53-90fb-0f0374ffcdb3-operator-scripts\") pod \"glance-db-create-pxzzc\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.845425 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-combined-ca-bundle\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.845484 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhdl4\" (UniqueName: \"kubernetes.io/projected/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-kube-api-access-mhdl4\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.882918 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1528b35-a982-4f53-90fb-0f0374ffcdb3-operator-scripts\") pod \"glance-db-create-pxzzc\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.948986 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-db-sync-config-data\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.949184 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-config-data\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.949326 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdrkc\" (UniqueName: \"kubernetes.io/projected/694a1ec1-f2d8-4637-b0ae-2bdba236854b-kube-api-access-cdrkc\") pod \"neutron-db-create-mxtgr\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.949481 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694a1ec1-f2d8-4637-b0ae-2bdba236854b-operator-scripts\") pod \"neutron-db-create-mxtgr\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.949686 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-combined-ca-bundle\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.949768 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhdl4\" (UniqueName: \"kubernetes.io/projected/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-kube-api-access-mhdl4\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 14:56:48.970020 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9569-account-create-update-gxrxr"] Dec 10 14:56:48 crc kubenswrapper[4718]: I1210 
14:56:48.987971 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:48.999640 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.027494 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9569-account-create-update-gxrxr"] Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.052068 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdrkc\" (UniqueName: \"kubernetes.io/projected/694a1ec1-f2d8-4637-b0ae-2bdba236854b-kube-api-access-cdrkc\") pod \"neutron-db-create-mxtgr\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.052178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694a1ec1-f2d8-4637-b0ae-2bdba236854b-operator-scripts\") pod \"neutron-db-create-mxtgr\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.052278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ac237b-96dc-4d03-9afd-3693b621a63b-operator-scripts\") pod \"glance-9569-account-create-update-gxrxr\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.052343 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6pwn\" (UniqueName: \"kubernetes.io/projected/40ac237b-96dc-4d03-9afd-3693b621a63b-kube-api-access-c6pwn\") pod 
\"glance-9569-account-create-update-gxrxr\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.053603 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694a1ec1-f2d8-4637-b0ae-2bdba236854b-operator-scripts\") pod \"neutron-db-create-mxtgr\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.061981 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3d20-account-create-update-z8f8t"] Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.064142 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.077949 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-db-sync-config-data\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.078080 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-config-data\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.078663 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.078869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phsj2\" (UniqueName: 
\"kubernetes.io/projected/d1528b35-a982-4f53-90fb-0f0374ffcdb3-kube-api-access-phsj2\") pod \"glance-db-create-pxzzc\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.078946 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3d20-account-create-update-z8f8t"] Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.081074 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-combined-ca-bundle\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.098520 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhdl4\" (UniqueName: \"kubernetes.io/projected/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-kube-api-access-mhdl4\") pod \"watcher-db-sync-bcnrt\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.105180 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdrkc\" (UniqueName: \"kubernetes.io/projected/694a1ec1-f2d8-4637-b0ae-2bdba236854b-kube-api-access-cdrkc\") pod \"neutron-db-create-mxtgr\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.108628 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fe2e-account-create-update-q9jg8"] Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.155514 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clthp\" (UniqueName: \"kubernetes.io/projected/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-kube-api-access-clthp\") pod 
\"neutron-3d20-account-create-update-z8f8t\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.155592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-operator-scripts\") pod \"neutron-3d20-account-create-update-z8f8t\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.155653 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ac237b-96dc-4d03-9afd-3693b621a63b-operator-scripts\") pod \"glance-9569-account-create-update-gxrxr\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.155709 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6pwn\" (UniqueName: \"kubernetes.io/projected/40ac237b-96dc-4d03-9afd-3693b621a63b-kube-api-access-c6pwn\") pod \"glance-9569-account-create-update-gxrxr\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.156691 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ac237b-96dc-4d03-9afd-3693b621a63b-operator-scripts\") pod \"glance-9569-account-create-update-gxrxr\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.177566 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.189800 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6pwn\" (UniqueName: \"kubernetes.io/projected/40ac237b-96dc-4d03-9afd-3693b621a63b-kube-api-access-c6pwn\") pod \"glance-9569-account-create-update-gxrxr\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.225010 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.258154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clthp\" (UniqueName: \"kubernetes.io/projected/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-kube-api-access-clthp\") pod \"neutron-3d20-account-create-update-z8f8t\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.258795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-operator-scripts\") pod \"neutron-3d20-account-create-update-z8f8t\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.259796 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-operator-scripts\") pod \"neutron-3d20-account-create-update-z8f8t\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.264549 4718 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db97-account-create-update-5zs2l"] Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.273292 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-dwmpv"] Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.283014 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nhtpt"] Dec 10 14:56:49 crc kubenswrapper[4718]: W1210 14:56:49.286930 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb15123b2_81bb_4fab_b6cd_ed84c0965118.slice/crio-a84419b5da2456d648ff0685db1aef0497fee23b07446758f3c6606718d62535 WatchSource:0}: Error finding container a84419b5da2456d648ff0685db1aef0497fee23b07446758f3c6606718d62535: Status 404 returned error can't find the container with id a84419b5da2456d648ff0685db1aef0497fee23b07446758f3c6606718d62535 Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.290548 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clthp\" (UniqueName: \"kubernetes.io/projected/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-kube-api-access-clthp\") pod \"neutron-3d20-account-create-update-z8f8t\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:49 crc kubenswrapper[4718]: W1210 14:56:49.290633 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73737af3_b524_4da3_a97a_d3815b32ae7b.slice/crio-0e1a666b42471448c82cc536e441db1fa67a94ca96984e849336bab4acce1bd4 WatchSource:0}: Error finding container 0e1a666b42471448c82cc536e441db1fa67a94ca96984e849336bab4acce1bd4: Status 404 returned error can't find the container with id 0e1a666b42471448c82cc536e441db1fa67a94ca96984e849336bab4acce1bd4 Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.292621 4718 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.333001 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.355911 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5fkv5"] Dec 10 14:56:49 crc kubenswrapper[4718]: W1210 14:56:49.501754 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e87b030_a8e8_444f_9dda_a2d7a563aba9.slice/crio-d962514dc26193b6d5b5a368575ba397257feeba10a22ebae8bc11b939261505 WatchSource:0}: Error finding container d962514dc26193b6d5b5a368575ba397257feeba10a22ebae8bc11b939261505: Status 404 returned error can't find the container with id d962514dc26193b6d5b5a368575ba397257feeba10a22ebae8bc11b939261505 Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.510267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nhtpt" event={"ID":"801e3b76-dd13-4285-9597-8f7874496ed5","Type":"ContainerStarted","Data":"8e8623bb7598816053e42fa7a76ad3bc7446402073dec03a742af864e9e63fcf"} Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.523115 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fe2e-account-create-update-q9jg8" event={"ID":"ec673fdd-e3b3-4576-aefa-c53822bd5f27","Type":"ContainerStarted","Data":"ea43e39dbd7f63221a0b86db1aaf897202ba057ca356afdb7a2847dd139968d4"} Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.524675 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db97-account-create-update-5zs2l" event={"ID":"73737af3-b524-4da3-a97a-d3815b32ae7b","Type":"ContainerStarted","Data":"0e1a666b42471448c82cc536e441db1fa67a94ca96984e849336bab4acce1bd4"} Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 
14:56:49.526759 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" event={"ID":"b15123b2-81bb-4fab-b6cd-ed84c0965118","Type":"ContainerStarted","Data":"a84419b5da2456d648ff0685db1aef0497fee23b07446758f3c6606718d62535"} Dec 10 14:56:49 crc kubenswrapper[4718]: I1210 14:56:49.623935 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:50 crc kubenswrapper[4718]: I1210 14:56:50.291813 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pxzzc"] Dec 10 14:56:50 crc kubenswrapper[4718]: I1210 14:56:50.338917 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-bcnrt"] Dec 10 14:56:50 crc kubenswrapper[4718]: I1210 14:56:50.523981 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mxtgr"] Dec 10 14:56:50 crc kubenswrapper[4718]: I1210 14:56:50.554114 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5fkv5" event={"ID":"1e87b030-a8e8-444f-9dda-a2d7a563aba9","Type":"ContainerStarted","Data":"d962514dc26193b6d5b5a368575ba397257feeba10a22ebae8bc11b939261505"} Dec 10 14:56:50 crc kubenswrapper[4718]: I1210 14:56:50.632310 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3d20-account-create-update-z8f8t"] Dec 10 14:56:50 crc kubenswrapper[4718]: I1210 14:56:50.667233 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9569-account-create-update-gxrxr"] Dec 10 14:56:50 crc kubenswrapper[4718]: W1210 14:56:50.805554 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40ac237b_96dc_4d03_9afd_3693b621a63b.slice/crio-4a9a72090c20eb002ace8b381fa0b54b583f6a40a51c7ce22552d8d3f41ca05c WatchSource:0}: Error finding container 
4a9a72090c20eb002ace8b381fa0b54b583f6a40a51c7ce22552d8d3f41ca05c: Status 404 returned error can't find the container with id 4a9a72090c20eb002ace8b381fa0b54b583f6a40a51c7ce22552d8d3f41ca05c Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.586084 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9569-account-create-update-gxrxr" event={"ID":"40ac237b-96dc-4d03-9afd-3693b621a63b","Type":"ContainerStarted","Data":"914878a308caab8ea00b865994a333e870b11adb944ec4e4bfd010a94c7bcb68"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.586163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9569-account-create-update-gxrxr" event={"ID":"40ac237b-96dc-4d03-9afd-3693b621a63b","Type":"ContainerStarted","Data":"4a9a72090c20eb002ace8b381fa0b54b583f6a40a51c7ce22552d8d3f41ca05c"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.594265 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fe2e-account-create-update-q9jg8" event={"ID":"ec673fdd-e3b3-4576-aefa-c53822bd5f27","Type":"ContainerStarted","Data":"46fc31ff953bc13a4411e6db8f2d2c6f86fb3366991f5bce9d8fd36837803827"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.614683 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mxtgr" event={"ID":"694a1ec1-f2d8-4637-b0ae-2bdba236854b","Type":"ContainerStarted","Data":"5828b8554fc36ef8d4562434851260cc4164618ef4d7a238ab7439964d6b5803"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.614753 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mxtgr" event={"ID":"694a1ec1-f2d8-4637-b0ae-2bdba236854b","Type":"ContainerStarted","Data":"946d8dfae8bb0be8cf97bea221c225b2eea95624b651cb57017534d39bd17556"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.617160 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-9569-account-create-update-gxrxr" 
podStartSLOduration=3.617122008 podStartE2EDuration="3.617122008s" podCreationTimestamp="2025-12-10 14:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:51.616209125 +0000 UTC m=+1516.565432542" watchObservedRunningTime="2025-12-10 14:56:51.617122008 +0000 UTC m=+1516.566345435" Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.628301 4718 generic.go:334] "Generic (PLEG): container finished" podID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerID="ffbdf3a0ecd58eb5930b6dfac8243f7a0af458182edee3af0698e445e118016a" exitCode=0 Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.635191 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" event={"ID":"b15123b2-81bb-4fab-b6cd-ed84c0965118","Type":"ContainerDied","Data":"ffbdf3a0ecd58eb5930b6dfac8243f7a0af458182edee3af0698e445e118016a"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.673180 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxzzc" event={"ID":"d1528b35-a982-4f53-90fb-0f0374ffcdb3","Type":"ContainerStarted","Data":"1eef66815823a1a20fa4b36826326ff1e43676f5e62b09cf904ad050f90d1ffe"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.673272 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxzzc" event={"ID":"d1528b35-a982-4f53-90fb-0f0374ffcdb3","Type":"ContainerStarted","Data":"0ef47d2340ace172ec02a3d4892d4e10eb31e9b59813ce4ad06d01ae73133195"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.705296 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bcnrt" event={"ID":"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109","Type":"ContainerStarted","Data":"180f2e76ed4911d43a09683cb5e9b18af0e3967d895c8ccd60bcb7c0c33d1448"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.715074 4718 generic.go:334] "Generic (PLEG): container 
finished" podID="df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" containerID="db973de307b14a5162d5187cdbfacfe4c57fd955103c73d63283b37fdf9a5823" exitCode=0 Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.715273 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k6j4m" event={"ID":"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3","Type":"ContainerDied","Data":"db973de307b14a5162d5187cdbfacfe4c57fd955103c73d63283b37fdf9a5823"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.725459 4718 generic.go:334] "Generic (PLEG): container finished" podID="1e87b030-a8e8-444f-9dda-a2d7a563aba9" containerID="f59b49d2f55b83625d0744fc11c15650a1ded84a919859b9530099dc5d596cd3" exitCode=0 Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.725565 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5fkv5" event={"ID":"1e87b030-a8e8-444f-9dda-a2d7a563aba9","Type":"ContainerDied","Data":"f59b49d2f55b83625d0744fc11c15650a1ded84a919859b9530099dc5d596cd3"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.734653 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mxtgr" podStartSLOduration=3.734623998 podStartE2EDuration="3.734623998s" podCreationTimestamp="2025-12-10 14:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:51.636644546 +0000 UTC m=+1516.585867953" watchObservedRunningTime="2025-12-10 14:56:51.734623998 +0000 UTC m=+1516.683847415" Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.748164 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-fe2e-account-create-update-q9jg8" podStartSLOduration=7.748110852 podStartE2EDuration="7.748110852s" podCreationTimestamp="2025-12-10 14:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 14:56:51.663652166 +0000 UTC m=+1516.612875583" watchObservedRunningTime="2025-12-10 14:56:51.748110852 +0000 UTC m=+1516.697334269" Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.748961 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3d20-account-create-update-z8f8t" event={"ID":"fcd536b9-1513-4c72-8cb9-acbf491e6b3d","Type":"ContainerStarted","Data":"7fb73a0219b1b074c05870e910e23c34769f1823949798c6c1e75ba43dbe8647"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.768067 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerStarted","Data":"a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.768157 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerStarted","Data":"23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.777808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db97-account-create-update-5zs2l" event={"ID":"73737af3-b524-4da3-a97a-d3815b32ae7b","Type":"ContainerStarted","Data":"4517535a28edcb323efd5e3b0dc7f232ca9c4d6123a5a197136016ebf4b3b423"} Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.783009 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-pxzzc" podStartSLOduration=3.7829796719999997 podStartE2EDuration="3.782979672s" podCreationTimestamp="2025-12-10 14:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:51.741190875 +0000 UTC m=+1516.690414292" watchObservedRunningTime="2025-12-10 14:56:51.782979672 +0000 UTC 
m=+1516.732203109" Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.845046 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-3d20-account-create-update-z8f8t" podStartSLOduration=3.845013026 podStartE2EDuration="3.845013026s" podCreationTimestamp="2025-12-10 14:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:51.824858311 +0000 UTC m=+1516.774081748" watchObservedRunningTime="2025-12-10 14:56:51.845013026 +0000 UTC m=+1516.794236443" Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.872131 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db97-account-create-update-5zs2l" podStartSLOduration=6.872090787 podStartE2EDuration="6.872090787s" podCreationTimestamp="2025-12-10 14:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:51.863444616 +0000 UTC m=+1516.812668033" watchObservedRunningTime="2025-12-10 14:56:51.872090787 +0000 UTC m=+1516.821314204" Dec 10 14:56:51 crc kubenswrapper[4718]: I1210 14:56:51.961577 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.961542981 podStartE2EDuration="19.961542981s" podCreationTimestamp="2025-12-10 14:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:51.921774695 +0000 UTC m=+1516.870998112" watchObservedRunningTime="2025-12-10 14:56:51.961542981 +0000 UTC m=+1516.910766398" Dec 10 14:56:52 crc kubenswrapper[4718]: E1210 14:56:52.305765 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec673fdd_e3b3_4576_aefa_c53822bd5f27.slice/crio-conmon-46fc31ff953bc13a4411e6db8f2d2c6f86fb3366991f5bce9d8fd36837803827.scope\": RecentStats: unable to find data in memory cache]" Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.811565 4718 generic.go:334] "Generic (PLEG): container finished" podID="ec673fdd-e3b3-4576-aefa-c53822bd5f27" containerID="46fc31ff953bc13a4411e6db8f2d2c6f86fb3366991f5bce9d8fd36837803827" exitCode=0 Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.812221 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fe2e-account-create-update-q9jg8" event={"ID":"ec673fdd-e3b3-4576-aefa-c53822bd5f27","Type":"ContainerDied","Data":"46fc31ff953bc13a4411e6db8f2d2c6f86fb3366991f5bce9d8fd36837803827"} Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.818612 4718 generic.go:334] "Generic (PLEG): container finished" podID="73737af3-b524-4da3-a97a-d3815b32ae7b" containerID="4517535a28edcb323efd5e3b0dc7f232ca9c4d6123a5a197136016ebf4b3b423" exitCode=0 Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.818735 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db97-account-create-update-5zs2l" event={"ID":"73737af3-b524-4da3-a97a-d3815b32ae7b","Type":"ContainerDied","Data":"4517535a28edcb323efd5e3b0dc7f232ca9c4d6123a5a197136016ebf4b3b423"} Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.822202 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" event={"ID":"b15123b2-81bb-4fab-b6cd-ed84c0965118","Type":"ContainerStarted","Data":"0a4f7cf73dde3288d097d40166898edbde2e5a5fe7c4c7537d380c07cf6edfc5"} Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.822361 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.825769 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-3d20-account-create-update-z8f8t" event={"ID":"fcd536b9-1513-4c72-8cb9-acbf491e6b3d","Type":"ContainerStarted","Data":"bc7204ed242e5df8ba92dc5033f2c984c42e3690025e33a8e30668744e0363c3"} Dec 10 14:56:52 crc kubenswrapper[4718]: I1210 14:56:52.887045 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" podStartSLOduration=7.887021036 podStartE2EDuration="7.887021036s" podCreationTimestamp="2025-12-10 14:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:56:52.883292921 +0000 UTC m=+1517.832516338" watchObservedRunningTime="2025-12-10 14:56:52.887021036 +0000 UTC m=+1517.836244453" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.316235 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.404337 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.407852 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.529503 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-operator-scripts\") pod \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.529677 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e87b030-a8e8-444f-9dda-a2d7a563aba9-operator-scripts\") pod \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.529750 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm8vd\" (UniqueName: \"kubernetes.io/projected/1e87b030-a8e8-444f-9dda-a2d7a563aba9-kube-api-access-tm8vd\") pod \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\" (UID: \"1e87b030-a8e8-444f-9dda-a2d7a563aba9\") " Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.529851 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-kube-api-access-gfnpw\") pod \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\" (UID: \"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3\") " Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.531597 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e87b030-a8e8-444f-9dda-a2d7a563aba9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e87b030-a8e8-444f-9dda-a2d7a563aba9" (UID: "1e87b030-a8e8-444f-9dda-a2d7a563aba9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.532740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" (UID: "df5dab54-fa6b-4cfd-9c05-a6eeeec57de3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.541426 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e87b030-a8e8-444f-9dda-a2d7a563aba9-kube-api-access-tm8vd" (OuterVolumeSpecName: "kube-api-access-tm8vd") pod "1e87b030-a8e8-444f-9dda-a2d7a563aba9" (UID: "1e87b030-a8e8-444f-9dda-a2d7a563aba9"). InnerVolumeSpecName "kube-api-access-tm8vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.556680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-kube-api-access-gfnpw" (OuterVolumeSpecName: "kube-api-access-gfnpw") pod "df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" (UID: "df5dab54-fa6b-4cfd-9c05-a6eeeec57de3"). InnerVolumeSpecName "kube-api-access-gfnpw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.632307 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.632350 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e87b030-a8e8-444f-9dda-a2d7a563aba9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.632366 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm8vd\" (UniqueName: \"kubernetes.io/projected/1e87b030-a8e8-444f-9dda-a2d7a563aba9-kube-api-access-tm8vd\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.632388 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfnpw\" (UniqueName: \"kubernetes.io/projected/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3-kube-api-access-gfnpw\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.891195 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k6j4m" event={"ID":"df5dab54-fa6b-4cfd-9c05-a6eeeec57de3","Type":"ContainerDied","Data":"320079ad5df1cbdb308a57d042f4094309270472ed6f81e78fb5f6722367693f"} Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.891289 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320079ad5df1cbdb308a57d042f4094309270472ed6f81e78fb5f6722367693f" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.891529 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k6j4m" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.918070 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5fkv5" Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.918492 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5fkv5" event={"ID":"1e87b030-a8e8-444f-9dda-a2d7a563aba9","Type":"ContainerDied","Data":"d962514dc26193b6d5b5a368575ba397257feeba10a22ebae8bc11b939261505"} Dec 10 14:56:53 crc kubenswrapper[4718]: I1210 14:56:53.918573 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d962514dc26193b6d5b5a368575ba397257feeba10a22ebae8bc11b939261505" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.531525 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.554938 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.714326 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhbxd\" (UniqueName: \"kubernetes.io/projected/ec673fdd-e3b3-4576-aefa-c53822bd5f27-kube-api-access-mhbxd\") pod \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.714596 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec673fdd-e3b3-4576-aefa-c53822bd5f27-operator-scripts\") pod \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\" (UID: \"ec673fdd-e3b3-4576-aefa-c53822bd5f27\") " Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.714791 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73737af3-b524-4da3-a97a-d3815b32ae7b-operator-scripts\") pod 
\"73737af3-b524-4da3-a97a-d3815b32ae7b\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.714851 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc2kt\" (UniqueName: \"kubernetes.io/projected/73737af3-b524-4da3-a97a-d3815b32ae7b-kube-api-access-sc2kt\") pod \"73737af3-b524-4da3-a97a-d3815b32ae7b\" (UID: \"73737af3-b524-4da3-a97a-d3815b32ae7b\") " Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.717103 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73737af3-b524-4da3-a97a-d3815b32ae7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73737af3-b524-4da3-a97a-d3815b32ae7b" (UID: "73737af3-b524-4da3-a97a-d3815b32ae7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.717315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec673fdd-e3b3-4576-aefa-c53822bd5f27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec673fdd-e3b3-4576-aefa-c53822bd5f27" (UID: "ec673fdd-e3b3-4576-aefa-c53822bd5f27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.795751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec673fdd-e3b3-4576-aefa-c53822bd5f27-kube-api-access-mhbxd" (OuterVolumeSpecName: "kube-api-access-mhbxd") pod "ec673fdd-e3b3-4576-aefa-c53822bd5f27" (UID: "ec673fdd-e3b3-4576-aefa-c53822bd5f27"). InnerVolumeSpecName "kube-api-access-mhbxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.798555 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73737af3-b524-4da3-a97a-d3815b32ae7b-kube-api-access-sc2kt" (OuterVolumeSpecName: "kube-api-access-sc2kt") pod "73737af3-b524-4da3-a97a-d3815b32ae7b" (UID: "73737af3-b524-4da3-a97a-d3815b32ae7b"). InnerVolumeSpecName "kube-api-access-sc2kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.816437 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhbxd\" (UniqueName: \"kubernetes.io/projected/ec673fdd-e3b3-4576-aefa-c53822bd5f27-kube-api-access-mhbxd\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.816482 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec673fdd-e3b3-4576-aefa-c53822bd5f27-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.816497 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73737af3-b524-4da3-a97a-d3815b32ae7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.816510 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc2kt\" (UniqueName: \"kubernetes.io/projected/73737af3-b524-4da3-a97a-d3815b32ae7b-kube-api-access-sc2kt\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.936316 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fe2e-account-create-update-q9jg8" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.936483 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fe2e-account-create-update-q9jg8" event={"ID":"ec673fdd-e3b3-4576-aefa-c53822bd5f27","Type":"ContainerDied","Data":"ea43e39dbd7f63221a0b86db1aaf897202ba057ca356afdb7a2847dd139968d4"} Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.936546 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea43e39dbd7f63221a0b86db1aaf897202ba057ca356afdb7a2847dd139968d4" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.942924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db97-account-create-update-5zs2l" event={"ID":"73737af3-b524-4da3-a97a-d3815b32ae7b","Type":"ContainerDied","Data":"0e1a666b42471448c82cc536e441db1fa67a94ca96984e849336bab4acce1bd4"} Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.942980 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1a666b42471448c82cc536e441db1fa67a94ca96984e849336bab4acce1bd4" Dec 10 14:56:54 crc kubenswrapper[4718]: I1210 14:56:54.943089 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db97-account-create-update-5zs2l" Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.968520 4718 generic.go:334] "Generic (PLEG): container finished" podID="40ac237b-96dc-4d03-9afd-3693b621a63b" containerID="914878a308caab8ea00b865994a333e870b11adb944ec4e4bfd010a94c7bcb68" exitCode=0 Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.968739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9569-account-create-update-gxrxr" event={"ID":"40ac237b-96dc-4d03-9afd-3693b621a63b","Type":"ContainerDied","Data":"914878a308caab8ea00b865994a333e870b11adb944ec4e4bfd010a94c7bcb68"} Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.974005 4718 generic.go:334] "Generic (PLEG): container finished" podID="694a1ec1-f2d8-4637-b0ae-2bdba236854b" containerID="5828b8554fc36ef8d4562434851260cc4164618ef4d7a238ab7439964d6b5803" exitCode=0 Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.974081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mxtgr" event={"ID":"694a1ec1-f2d8-4637-b0ae-2bdba236854b","Type":"ContainerDied","Data":"5828b8554fc36ef8d4562434851260cc4164618ef4d7a238ab7439964d6b5803"} Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.978161 4718 generic.go:334] "Generic (PLEG): container finished" podID="d1528b35-a982-4f53-90fb-0f0374ffcdb3" containerID="1eef66815823a1a20fa4b36826326ff1e43676f5e62b09cf904ad050f90d1ffe" exitCode=0 Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.978228 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxzzc" event={"ID":"d1528b35-a982-4f53-90fb-0f0374ffcdb3","Type":"ContainerDied","Data":"1eef66815823a1a20fa4b36826326ff1e43676f5e62b09cf904ad050f90d1ffe"} Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.981366 4718 generic.go:334] "Generic (PLEG): container finished" podID="fcd536b9-1513-4c72-8cb9-acbf491e6b3d" 
containerID="bc7204ed242e5df8ba92dc5033f2c984c42e3690025e33a8e30668744e0363c3" exitCode=0 Dec 10 14:56:55 crc kubenswrapper[4718]: I1210 14:56:55.981467 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3d20-account-create-update-z8f8t" event={"ID":"fcd536b9-1513-4c72-8cb9-acbf491e6b3d","Type":"ContainerDied","Data":"bc7204ed242e5df8ba92dc5033f2c984c42e3690025e33a8e30668744e0363c3"} Dec 10 14:56:56 crc kubenswrapper[4718]: I1210 14:56:56.750301 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:56 crc kubenswrapper[4718]: I1210 14:56:56.846783 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:56:57 crc kubenswrapper[4718]: I1210 14:56:57.004805 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btnnx"] Dec 10 14:56:58 crc kubenswrapper[4718]: I1210 14:56:58.030323 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-btnnx" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="registry-server" containerID="cri-o://c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01" gracePeriod=2 Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.044589 4718 generic.go:334] "Generic (PLEG): container finished" podID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerID="c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01" exitCode=0 Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.044685 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerDied","Data":"c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01"} Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.447801 4718 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.459544 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mxtgr" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.473164 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.485310 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pxzzc" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.619846 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-operator-scripts\") pod \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.620090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694a1ec1-f2d8-4637-b0ae-2bdba236854b-operator-scripts\") pod \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.620288 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clthp\" (UniqueName: \"kubernetes.io/projected/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-kube-api-access-clthp\") pod \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\" (UID: \"fcd536b9-1513-4c72-8cb9-acbf491e6b3d\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.620844 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/694a1ec1-f2d8-4637-b0ae-2bdba236854b-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "694a1ec1-f2d8-4637-b0ae-2bdba236854b" (UID: "694a1ec1-f2d8-4637-b0ae-2bdba236854b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.620868 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fcd536b9-1513-4c72-8cb9-acbf491e6b3d" (UID: "fcd536b9-1513-4c72-8cb9-acbf491e6b3d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.621718 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1528b35-a982-4f53-90fb-0f0374ffcdb3-operator-scripts\") pod \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.621823 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ac237b-96dc-4d03-9afd-3693b621a63b-operator-scripts\") pod \"40ac237b-96dc-4d03-9afd-3693b621a63b\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.621879 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdrkc\" (UniqueName: \"kubernetes.io/projected/694a1ec1-f2d8-4637-b0ae-2bdba236854b-kube-api-access-cdrkc\") pod \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\" (UID: \"694a1ec1-f2d8-4637-b0ae-2bdba236854b\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.621964 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phsj2\" (UniqueName: 
\"kubernetes.io/projected/d1528b35-a982-4f53-90fb-0f0374ffcdb3-kube-api-access-phsj2\") pod \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\" (UID: \"d1528b35-a982-4f53-90fb-0f0374ffcdb3\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.622027 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6pwn\" (UniqueName: \"kubernetes.io/projected/40ac237b-96dc-4d03-9afd-3693b621a63b-kube-api-access-c6pwn\") pod \"40ac237b-96dc-4d03-9afd-3693b621a63b\" (UID: \"40ac237b-96dc-4d03-9afd-3693b621a63b\") " Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.622237 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1528b35-a982-4f53-90fb-0f0374ffcdb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1528b35-a982-4f53-90fb-0f0374ffcdb3" (UID: "d1528b35-a982-4f53-90fb-0f0374ffcdb3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.622498 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ac237b-96dc-4d03-9afd-3693b621a63b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40ac237b-96dc-4d03-9afd-3693b621a63b" (UID: "40ac237b-96dc-4d03-9afd-3693b621a63b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.623298 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1528b35-a982-4f53-90fb-0f0374ffcdb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.623326 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ac237b-96dc-4d03-9afd-3693b621a63b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.623339 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.623351 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/694a1ec1-f2d8-4637-b0ae-2bdba236854b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.629112 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1528b35-a982-4f53-90fb-0f0374ffcdb3-kube-api-access-phsj2" (OuterVolumeSpecName: "kube-api-access-phsj2") pod "d1528b35-a982-4f53-90fb-0f0374ffcdb3" (UID: "d1528b35-a982-4f53-90fb-0f0374ffcdb3"). InnerVolumeSpecName "kube-api-access-phsj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.629271 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-kube-api-access-clthp" (OuterVolumeSpecName: "kube-api-access-clthp") pod "fcd536b9-1513-4c72-8cb9-acbf491e6b3d" (UID: "fcd536b9-1513-4c72-8cb9-acbf491e6b3d"). 
InnerVolumeSpecName "kube-api-access-clthp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.630010 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/694a1ec1-f2d8-4637-b0ae-2bdba236854b-kube-api-access-cdrkc" (OuterVolumeSpecName: "kube-api-access-cdrkc") pod "694a1ec1-f2d8-4637-b0ae-2bdba236854b" (UID: "694a1ec1-f2d8-4637-b0ae-2bdba236854b"). InnerVolumeSpecName "kube-api-access-cdrkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.631180 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ac237b-96dc-4d03-9afd-3693b621a63b-kube-api-access-c6pwn" (OuterVolumeSpecName: "kube-api-access-c6pwn") pod "40ac237b-96dc-4d03-9afd-3693b621a63b" (UID: "40ac237b-96dc-4d03-9afd-3693b621a63b"). InnerVolumeSpecName "kube-api-access-c6pwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.725003 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clthp\" (UniqueName: \"kubernetes.io/projected/fcd536b9-1513-4c72-8cb9-acbf491e6b3d-kube-api-access-clthp\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.725041 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdrkc\" (UniqueName: \"kubernetes.io/projected/694a1ec1-f2d8-4637-b0ae-2bdba236854b-kube-api-access-cdrkc\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.725055 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phsj2\" (UniqueName: \"kubernetes.io/projected/d1528b35-a982-4f53-90fb-0f0374ffcdb3-kube-api-access-phsj2\") on node \"crc\" DevicePath \"\"" Dec 10 14:56:59 crc kubenswrapper[4718]: I1210 14:56:59.725064 4718 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-c6pwn\" (UniqueName: \"kubernetes.io/projected/40ac237b-96dc-4d03-9afd-3693b621a63b-kube-api-access-c6pwn\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.066239 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9569-account-create-update-gxrxr" event={"ID":"40ac237b-96dc-4d03-9afd-3693b621a63b","Type":"ContainerDied","Data":"4a9a72090c20eb002ace8b381fa0b54b583f6a40a51c7ce22552d8d3f41ca05c"} Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.066307 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a9a72090c20eb002ace8b381fa0b54b583f6a40a51c7ce22552d8d3f41ca05c" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.066259 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9569-account-create-update-gxrxr" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.068803 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mxtgr" event={"ID":"694a1ec1-f2d8-4637-b0ae-2bdba236854b","Type":"ContainerDied","Data":"946d8dfae8bb0be8cf97bea221c225b2eea95624b651cb57017534d39bd17556"} Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.068881 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946d8dfae8bb0be8cf97bea221c225b2eea95624b651cb57017534d39bd17556" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.068941 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mxtgr" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.071044 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxzzc" event={"ID":"d1528b35-a982-4f53-90fb-0f0374ffcdb3","Type":"ContainerDied","Data":"0ef47d2340ace172ec02a3d4892d4e10eb31e9b59813ce4ad06d01ae73133195"} Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.071084 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef47d2340ace172ec02a3d4892d4e10eb31e9b59813ce4ad06d01ae73133195" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.071180 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pxzzc" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.075224 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3d20-account-create-update-z8f8t" event={"ID":"fcd536b9-1513-4c72-8cb9-acbf491e6b3d","Type":"ContainerDied","Data":"7fb73a0219b1b074c05870e910e23c34769f1823949798c6c1e75ba43dbe8647"} Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.075272 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb73a0219b1b074c05870e910e23c34769f1823949798c6c1e75ba43dbe8647" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.075299 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3d20-account-create-update-z8f8t" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.799702 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.911146 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-nqtx8"] Dec 10 14:57:00 crc kubenswrapper[4718]: I1210 14:57:00.911493 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="dnsmasq-dns" containerID="cri-o://d1aac8ecc11e9d176473e21ff10b04dcb193e5b9f6b234592fbb3ca7dfb64480" gracePeriod=10 Dec 10 14:57:02 crc kubenswrapper[4718]: I1210 14:57:02.110367 4718 generic.go:334] "Generic (PLEG): container finished" podID="cf76b176-14cf-4972-9384-7a0c69151f84" containerID="d1aac8ecc11e9d176473e21ff10b04dcb193e5b9f6b234592fbb3ca7dfb64480" exitCode=0 Dec 10 14:57:02 crc kubenswrapper[4718]: I1210 14:57:02.110631 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" event={"ID":"cf76b176-14cf-4972-9384-7a0c69151f84","Type":"ContainerDied","Data":"d1aac8ecc11e9d176473e21ff10b04dcb193e5b9f6b234592fbb3ca7dfb64480"} Dec 10 14:57:03 crc kubenswrapper[4718]: I1210 14:57:03.317072 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 10 14:57:03 crc kubenswrapper[4718]: I1210 14:57:03.332198 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.170849 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qgdwr"] Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.171859 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fcd536b9-1513-4c72-8cb9-acbf491e6b3d" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.171892 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd536b9-1513-4c72-8cb9-acbf491e6b3d" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.171907 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1528b35-a982-4f53-90fb-0f0374ffcdb3" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.171916 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1528b35-a982-4f53-90fb-0f0374ffcdb3" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.171926 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ac237b-96dc-4d03-9afd-3693b621a63b" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.171933 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ac237b-96dc-4d03-9afd-3693b621a63b" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.171947 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e87b030-a8e8-444f-9dda-a2d7a563aba9" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.171956 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e87b030-a8e8-444f-9dda-a2d7a563aba9" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.171968 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.171978 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" containerName="mariadb-database-create" Dec 10 14:57:04 crc 
kubenswrapper[4718]: E1210 14:57:04.171992 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73737af3-b524-4da3-a97a-d3815b32ae7b" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172000 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="73737af3-b524-4da3-a97a-d3815b32ae7b" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.172012 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="694a1ec1-f2d8-4637-b0ae-2bdba236854b" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172018 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="694a1ec1-f2d8-4637-b0ae-2bdba236854b" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: E1210 14:57:04.172041 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec673fdd-e3b3-4576-aefa-c53822bd5f27" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172048 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec673fdd-e3b3-4576-aefa-c53822bd5f27" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172257 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172270 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1528b35-a982-4f53-90fb-0f0374ffcdb3" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172284 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd536b9-1513-4c72-8cb9-acbf491e6b3d" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172297 4718 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="73737af3-b524-4da3-a97a-d3815b32ae7b" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172308 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ac237b-96dc-4d03-9afd-3693b621a63b" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172324 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="694a1ec1-f2d8-4637-b0ae-2bdba236854b" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172333 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec673fdd-e3b3-4576-aefa-c53822bd5f27" containerName="mariadb-account-create-update" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.172344 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e87b030-a8e8-444f-9dda-a2d7a563aba9" containerName="mariadb-database-create" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.176649 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.192520 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qgdwr"] Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.204647 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.218780 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.219137 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2mtgq" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.283724 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-config-data\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.283819 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8928\" (UniqueName: \"kubernetes.io/projected/b3828426-9676-412b-aeaa-22c7c97989c4-kube-api-access-x8928\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.284945 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-combined-ca-bundle\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.285261 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-db-sync-config-data\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.388136 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-config-data\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.388210 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8928\" (UniqueName: \"kubernetes.io/projected/b3828426-9676-412b-aeaa-22c7c97989c4-kube-api-access-x8928\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.388313 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-combined-ca-bundle\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.388379 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-db-sync-config-data\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.399055 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-config-data\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.399964 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-combined-ca-bundle\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.404231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-db-sync-config-data\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.418492 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8928\" (UniqueName: \"kubernetes.io/projected/b3828426-9676-412b-aeaa-22c7c97989c4-kube-api-access-x8928\") pod \"glance-db-sync-qgdwr\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:04 crc kubenswrapper[4718]: I1210 14:57:04.533290 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qgdwr" Dec 10 14:57:05 crc kubenswrapper[4718]: I1210 14:57:05.225114 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Dec 10 14:57:06 crc kubenswrapper[4718]: E1210 14:57:06.690765 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01 is running failed: container process not found" containerID="c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:57:06 crc kubenswrapper[4718]: E1210 14:57:06.692449 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01 is running failed: container process not found" containerID="c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:57:06 crc kubenswrapper[4718]: E1210 14:57:06.692768 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01 is running failed: container process not found" containerID="c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01" cmd=["grpc_health_probe","-addr=:50051"] Dec 10 14:57:06 crc kubenswrapper[4718]: E1210 14:57:06.692823 4718 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01 is 
running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-btnnx" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="registry-server" Dec 10 14:57:07 crc kubenswrapper[4718]: E1210 14:57:07.467891 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current" Dec 10 14:57:07 crc kubenswrapper[4718]: E1210 14:57:07.467995 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current" Dec 10 14:57:07 crc kubenswrapper[4718]: E1210 14:57:07.468248 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhdl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-db-sync-bcnrt_openstack(bf1fd7f3-5d9a-44b4-8e4e-e71df148b109): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:07 crc kubenswrapper[4718]: E1210 14:57:07.469470 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-bcnrt" podUID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.728968 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.823077 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr65l\" (UniqueName: \"kubernetes.io/projected/635b489b-951a-4df2-8cb2-4c1fff2d8d59-kube-api-access-dr65l\") pod \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.823347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-utilities\") pod \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.823653 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-catalog-content\") pod \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\" (UID: \"635b489b-951a-4df2-8cb2-4c1fff2d8d59\") " Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.825527 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-utilities" (OuterVolumeSpecName: "utilities") pod "635b489b-951a-4df2-8cb2-4c1fff2d8d59" (UID: "635b489b-951a-4df2-8cb2-4c1fff2d8d59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.827946 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635b489b-951a-4df2-8cb2-4c1fff2d8d59-kube-api-access-dr65l" (OuterVolumeSpecName: "kube-api-access-dr65l") pod "635b489b-951a-4df2-8cb2-4c1fff2d8d59" (UID: "635b489b-951a-4df2-8cb2-4c1fff2d8d59"). InnerVolumeSpecName "kube-api-access-dr65l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.901868 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.926068 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr65l\" (UniqueName: \"kubernetes.io/projected/635b489b-951a-4df2-8cb2-4c1fff2d8d59-kube-api-access-dr65l\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.926122 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:07 crc kubenswrapper[4718]: I1210 14:57:07.974503 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "635b489b-951a-4df2-8cb2-4c1fff2d8d59" (UID: "635b489b-951a-4df2-8cb2-4c1fff2d8d59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.031619 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-sb\") pod \"cf76b176-14cf-4972-9384-7a0c69151f84\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.031741 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-config\") pod \"cf76b176-14cf-4972-9384-7a0c69151f84\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.031958 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbzpq\" (UniqueName: \"kubernetes.io/projected/cf76b176-14cf-4972-9384-7a0c69151f84-kube-api-access-fbzpq\") pod \"cf76b176-14cf-4972-9384-7a0c69151f84\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.032134 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-nb\") pod \"cf76b176-14cf-4972-9384-7a0c69151f84\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.032230 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-dns-svc\") pod \"cf76b176-14cf-4972-9384-7a0c69151f84\" (UID: \"cf76b176-14cf-4972-9384-7a0c69151f84\") " Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.043273 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/635b489b-951a-4df2-8cb2-4c1fff2d8d59-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.061134 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf76b176-14cf-4972-9384-7a0c69151f84-kube-api-access-fbzpq" (OuterVolumeSpecName: "kube-api-access-fbzpq") pod "cf76b176-14cf-4972-9384-7a0c69151f84" (UID: "cf76b176-14cf-4972-9384-7a0c69151f84"). InnerVolumeSpecName "kube-api-access-fbzpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.116542 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf76b176-14cf-4972-9384-7a0c69151f84" (UID: "cf76b176-14cf-4972-9384-7a0c69151f84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.132413 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf76b176-14cf-4972-9384-7a0c69151f84" (UID: "cf76b176-14cf-4972-9384-7a0c69151f84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.142512 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-config" (OuterVolumeSpecName: "config") pod "cf76b176-14cf-4972-9384-7a0c69151f84" (UID: "cf76b176-14cf-4972-9384-7a0c69151f84"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.148107 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.148145 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.148163 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.148176 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbzpq\" (UniqueName: \"kubernetes.io/projected/cf76b176-14cf-4972-9384-7a0c69151f84-kube-api-access-fbzpq\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.158668 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf76b176-14cf-4972-9384-7a0c69151f84" (UID: "cf76b176-14cf-4972-9384-7a0c69151f84"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.251654 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf76b176-14cf-4972-9384-7a0c69151f84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.298724 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btnnx" event={"ID":"635b489b-951a-4df2-8cb2-4c1fff2d8d59","Type":"ContainerDied","Data":"fb68f262a6056fc6394d74bffd14fbc9f09620a007aa44a71adc3e253155b53b"} Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.298806 4718 scope.go:117] "RemoveContainer" containerID="c4e9ffcc0a5c69fe431afe1bbbabf195c4044f88bf58abafd2d676bd90fded01" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.299081 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btnnx" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.306322 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nhtpt" event={"ID":"801e3b76-dd13-4285-9597-8f7874496ed5","Type":"ContainerStarted","Data":"b45e03e56b3c9b1b14020bda8abf447ed3f61d00e80d7e8264977ea6905c5c8e"} Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.313553 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.313895 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-nqtx8" event={"ID":"cf76b176-14cf-4972-9384-7a0c69151f84","Type":"ContainerDied","Data":"3340fba6875db912b203ea8f7075075f9619277405b4cd4046fe49a5bd336e3d"} Dec 10 14:57:08 crc kubenswrapper[4718]: E1210 14:57:08.314745 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-watcher-api:current\\\"\"" pod="openstack/watcher-db-sync-bcnrt" podUID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.325587 4718 scope.go:117] "RemoveContainer" containerID="ac39c130c0d7547daf2a4b4b1c0defe5839dc7009455adf2d179c57f99cbcedc" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.351701 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btnnx"] Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.365818 4718 scope.go:117] "RemoveContainer" containerID="3c14f40fb175da7e2372cde8f8f0cd7f54b89e5f8f4a9daf4e37538e5b6417a8" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.367796 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-btnnx"] Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.386707 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-nhtpt" podStartSLOduration=5.174764696 podStartE2EDuration="23.386658175s" podCreationTimestamp="2025-12-10 14:56:45 +0000 UTC" firstStartedPulling="2025-12-10 14:56:49.321049141 +0000 UTC m=+1514.270272558" lastFinishedPulling="2025-12-10 14:57:07.53294262 +0000 UTC m=+1532.482166037" observedRunningTime="2025-12-10 14:57:08.375876219 +0000 UTC 
m=+1533.325099646" watchObservedRunningTime="2025-12-10 14:57:08.386658175 +0000 UTC m=+1533.335881592" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.400713 4718 scope.go:117] "RemoveContainer" containerID="d1aac8ecc11e9d176473e21ff10b04dcb193e5b9f6b234592fbb3ca7dfb64480" Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.416514 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-nqtx8"] Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.426377 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-nqtx8"] Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.429222 4718 scope.go:117] "RemoveContainer" containerID="2f7cdfb3d0c175c44e60379e8206d5d52bb733fd38297d73b8dff8867da7d899" Dec 10 14:57:08 crc kubenswrapper[4718]: W1210 14:57:08.591241 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3828426_9676_412b_aeaa_22c7c97989c4.slice/crio-0ceca4f64d29b24363124f890feb23d791314c670d56f5b419b760ba8fcfe4d6 WatchSource:0}: Error finding container 0ceca4f64d29b24363124f890feb23d791314c670d56f5b419b760ba8fcfe4d6: Status 404 returned error can't find the container with id 0ceca4f64d29b24363124f890feb23d791314c670d56f5b419b760ba8fcfe4d6 Dec 10 14:57:08 crc kubenswrapper[4718]: I1210 14:57:08.593185 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qgdwr"] Dec 10 14:57:09 crc kubenswrapper[4718]: I1210 14:57:09.331498 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qgdwr" event={"ID":"b3828426-9676-412b-aeaa-22c7c97989c4","Type":"ContainerStarted","Data":"0ceca4f64d29b24363124f890feb23d791314c670d56f5b419b760ba8fcfe4d6"} Dec 10 14:57:10 crc kubenswrapper[4718]: I1210 14:57:10.040118 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" 
path="/var/lib/kubelet/pods/635b489b-951a-4df2-8cb2-4c1fff2d8d59/volumes" Dec 10 14:57:10 crc kubenswrapper[4718]: I1210 14:57:10.041864 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" path="/var/lib/kubelet/pods/cf76b176-14cf-4972-9384-7a0c69151f84/volumes" Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.084043 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.085091 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.085155 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.086269 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.086336 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" 
containerName="machine-config-daemon" containerID="cri-o://76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" gracePeriod=600 Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.453893 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" exitCode=0 Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.453974 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21"} Dec 10 14:57:18 crc kubenswrapper[4718]: I1210 14:57:18.454067 4718 scope.go:117] "RemoveContainer" containerID="2ec2fde063b0fe89bfac326b092793e5b0835b83f4f064a57717d8a122925145" Dec 10 14:57:18 crc kubenswrapper[4718]: E1210 14:57:18.715439 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:57:19 crc kubenswrapper[4718]: I1210 14:57:19.472343 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:57:19 crc kubenswrapper[4718]: E1210 14:57:19.472679 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:57:25 crc kubenswrapper[4718]: I1210 14:57:25.563898 4718 generic.go:334] "Generic (PLEG): container finished" podID="801e3b76-dd13-4285-9597-8f7874496ed5" containerID="b45e03e56b3c9b1b14020bda8abf447ed3f61d00e80d7e8264977ea6905c5c8e" exitCode=0 Dec 10 14:57:25 crc kubenswrapper[4718]: I1210 14:57:25.564069 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nhtpt" event={"ID":"801e3b76-dd13-4285-9597-8f7874496ed5","Type":"ContainerDied","Data":"b45e03e56b3c9b1b14020bda8abf447ed3f61d00e80d7e8264977ea6905c5c8e"} Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.605663 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nhtpt" event={"ID":"801e3b76-dd13-4285-9597-8f7874496ed5","Type":"ContainerDied","Data":"8e8623bb7598816053e42fa7a76ad3bc7446402073dec03a742af864e9e63fcf"} Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.607557 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8623bb7598816053e42fa7a76ad3bc7446402073dec03a742af864e9e63fcf" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.669359 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.850104 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-config-data\") pod \"801e3b76-dd13-4285-9597-8f7874496ed5\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.850173 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-combined-ca-bundle\") pod \"801e3b76-dd13-4285-9597-8f7874496ed5\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.850561 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5gsd\" (UniqueName: \"kubernetes.io/projected/801e3b76-dd13-4285-9597-8f7874496ed5-kube-api-access-q5gsd\") pod \"801e3b76-dd13-4285-9597-8f7874496ed5\" (UID: \"801e3b76-dd13-4285-9597-8f7874496ed5\") " Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.858012 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801e3b76-dd13-4285-9597-8f7874496ed5-kube-api-access-q5gsd" (OuterVolumeSpecName: "kube-api-access-q5gsd") pod "801e3b76-dd13-4285-9597-8f7874496ed5" (UID: "801e3b76-dd13-4285-9597-8f7874496ed5"). InnerVolumeSpecName "kube-api-access-q5gsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.892992 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "801e3b76-dd13-4285-9597-8f7874496ed5" (UID: "801e3b76-dd13-4285-9597-8f7874496ed5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.918797 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-config-data" (OuterVolumeSpecName: "config-data") pod "801e3b76-dd13-4285-9597-8f7874496ed5" (UID: "801e3b76-dd13-4285-9597-8f7874496ed5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.952923 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5gsd\" (UniqueName: \"kubernetes.io/projected/801e3b76-dd13-4285-9597-8f7874496ed5-kube-api-access-q5gsd\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.952979 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:28 crc kubenswrapper[4718]: I1210 14:57:28.952992 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801e3b76-dd13-4285-9597-8f7874496ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:29 crc kubenswrapper[4718]: I1210 14:57:29.619845 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bcnrt" event={"ID":"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109","Type":"ContainerStarted","Data":"afb065c6f5d1d7dc86c06200e773d7019392ee936131ec9ded0efe4e02f88d46"} Dec 10 14:57:29 crc kubenswrapper[4718]: I1210 14:57:29.623193 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nhtpt" Dec 10 14:57:29 crc kubenswrapper[4718]: I1210 14:57:29.624775 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qgdwr" event={"ID":"b3828426-9676-412b-aeaa-22c7c97989c4","Type":"ContainerStarted","Data":"98f769a12127da0cc2a40d9e3421fce565b7ca1da8624cc5b376f8fc402d8596"} Dec 10 14:57:29 crc kubenswrapper[4718]: I1210 14:57:29.649248 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-bcnrt" podStartSLOduration=3.745628969 podStartE2EDuration="41.649211896s" podCreationTimestamp="2025-12-10 14:56:48 +0000 UTC" firstStartedPulling="2025-12-10 14:56:50.783488696 +0000 UTC m=+1515.732712103" lastFinishedPulling="2025-12-10 14:57:28.687071613 +0000 UTC m=+1553.636295030" observedRunningTime="2025-12-10 14:57:29.645264045 +0000 UTC m=+1554.594487462" watchObservedRunningTime="2025-12-10 14:57:29.649211896 +0000 UTC m=+1554.598435323" Dec 10 14:57:29 crc kubenswrapper[4718]: I1210 14:57:29.722624 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qgdwr" podStartSLOduration=5.576191272 podStartE2EDuration="25.722592579s" podCreationTimestamp="2025-12-10 14:57:04 +0000 UTC" firstStartedPulling="2025-12-10 14:57:08.595542337 +0000 UTC m=+1533.544765754" lastFinishedPulling="2025-12-10 14:57:28.741943644 +0000 UTC m=+1553.691167061" observedRunningTime="2025-12-10 14:57:29.679291494 +0000 UTC m=+1554.628514921" watchObservedRunningTime="2025-12-10 14:57:29.722592579 +0000 UTC m=+1554.671815996" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.179465 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2g9vq"] Dec 10 14:57:30 crc kubenswrapper[4718]: E1210 14:57:30.185647 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="extract-content" Dec 10 14:57:30 crc 
kubenswrapper[4718]: I1210 14:57:30.185720 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="extract-content" Dec 10 14:57:30 crc kubenswrapper[4718]: E1210 14:57:30.185748 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="extract-utilities" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.185758 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="extract-utilities" Dec 10 14:57:30 crc kubenswrapper[4718]: E1210 14:57:30.185804 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801e3b76-dd13-4285-9597-8f7874496ed5" containerName="keystone-db-sync" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.185812 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="801e3b76-dd13-4285-9597-8f7874496ed5" containerName="keystone-db-sync" Dec 10 14:57:30 crc kubenswrapper[4718]: E1210 14:57:30.185828 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="registry-server" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.185835 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="registry-server" Dec 10 14:57:30 crc kubenswrapper[4718]: E1210 14:57:30.185856 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="dnsmasq-dns" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.185863 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="dnsmasq-dns" Dec 10 14:57:30 crc kubenswrapper[4718]: E1210 14:57:30.185881 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="init" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 
14:57:30.185900 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="init" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.186438 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf76b176-14cf-4972-9384-7a0c69151f84" containerName="dnsmasq-dns" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.186451 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="635b489b-951a-4df2-8cb2-4c1fff2d8d59" containerName="registry-server" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.186470 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="801e3b76-dd13-4285-9597-8f7874496ed5" containerName="keystone-db-sync" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.187365 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.193913 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fvdkw" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.194208 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.202853 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.202853 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.203211 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.224255 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2g9vq"] Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.225133 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-config-data\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.225175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-credential-keys\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.225245 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945mb\" (UniqueName: \"kubernetes.io/projected/7c827924-284b-42bd-b871-a892565d7a73-kube-api-access-945mb\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.225301 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-combined-ca-bundle\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.225341 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-fernet-keys\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.225791 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-scripts\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.275497 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-vxqcx"] Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.277682 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328200 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-config-data\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328263 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-credential-keys\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328325 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-nb\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328365 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-svc\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328435 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xmh\" (UniqueName: \"kubernetes.io/projected/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-kube-api-access-n9xmh\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-945mb\" (UniqueName: \"kubernetes.io/projected/7c827924-284b-42bd-b871-a892565d7a73-kube-api-access-945mb\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-swift-storage-0\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328553 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-combined-ca-bundle\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328603 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-config\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328633 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-fernet-keys\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328720 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-scripts\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.328753 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-sb\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.337068 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-vxqcx"] Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.385552 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-credential-keys\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.387502 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-combined-ca-bundle\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.388255 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-fernet-keys\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.390154 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-scripts\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.394539 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-config-data\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.406267 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-945mb\" (UniqueName: \"kubernetes.io/projected/7c827924-284b-42bd-b871-a892565d7a73-kube-api-access-945mb\") pod \"keystone-bootstrap-2g9vq\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.429699 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-sb\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.429816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-nb\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.429841 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-svc\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.429877 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xmh\" (UniqueName: \"kubernetes.io/projected/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-kube-api-access-n9xmh\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.429910 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-swift-storage-0\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.433990 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-sb\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.434405 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-svc\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.434788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-swift-storage-0\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.436585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-config\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.438019 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-config\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.448617 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-nb\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" 
(UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.490645 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xmh\" (UniqueName: \"kubernetes.io/projected/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-kube-api-access-n9xmh\") pod \"dnsmasq-dns-58bbf48b7f-vxqcx\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:30 crc kubenswrapper[4718]: I1210 14:57:30.554050 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.619207 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zr6j2"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.621077 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.653188 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-combined-ca-bundle\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.653295 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666bl\" (UniqueName: \"kubernetes.io/projected/db4ee945-67d7-4670-9192-2ecaf4f03c3d-kube-api-access-666bl\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.653377 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-config\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.662303 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.662686 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hfcj7" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.662846 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.699999 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zr6j2"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.759671 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-hctjl"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.762949 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-combined-ca-bundle\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.763033 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666bl\" (UniqueName: \"kubernetes.io/projected/db4ee945-67d7-4670-9192-2ecaf4f03c3d-kube-api-access-666bl\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.763068 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-config\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.778329 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.792633 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-config\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.801709 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-cwrnt"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.803527 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.814468 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cp2kf" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.816260 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.816927 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mtk4v" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.820300 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-combined-ca-bundle\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.850616 
4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.851018 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.905269 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666bl\" (UniqueName: \"kubernetes.io/projected/db4ee945-67d7-4670-9192-2ecaf4f03c3d-kube-api-access-666bl\") pod \"neutron-db-sync-zr6j2\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.948581 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cwrnt"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984222 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-db-sync-config-data\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984290 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-combined-ca-bundle\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984334 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftgjj\" (UniqueName: \"kubernetes.io/projected/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-kube-api-access-ftgjj\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " 
pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984405 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-db-sync-config-data\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984458 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-config-data\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-combined-ca-bundle\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/71a092ad-773d-47b4-bc1f-73358adecf4a-kube-api-access-bvqkf\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984822 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-scripts\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " 
pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:30.984914 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71a092ad-773d-47b4-bc1f-73358adecf4a-etc-machine-id\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.060509 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hctjl"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088073 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/71a092ad-773d-47b4-bc1f-73358adecf4a-kube-api-access-bvqkf\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-scripts\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088204 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71a092ad-773d-47b4-bc1f-73358adecf4a-etc-machine-id\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-db-sync-config-data\") pod \"cinder-db-sync-hctjl\" (UID: 
\"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088274 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-combined-ca-bundle\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088295 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftgjj\" (UniqueName: \"kubernetes.io/projected/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-kube-api-access-ftgjj\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088351 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-db-sync-config-data\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088408 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-config-data\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.088426 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-combined-ca-bundle\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc 
kubenswrapper[4718]: I1210 14:57:31.091733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71a092ad-773d-47b4-bc1f-73358adecf4a-etc-machine-id\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.114368 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-config-data\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.116887 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-db-sync-config-data\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.118637 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-combined-ca-bundle\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.136364 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-combined-ca-bundle\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.136433 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-scripts\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.136794 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-db-sync-config-data\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.150222 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftgjj\" (UniqueName: \"kubernetes.io/projected/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-kube-api-access-ftgjj\") pod \"barbican-db-sync-cwrnt\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.172570 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5d86d75cbf-rg448"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.175121 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.179951 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.180674 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.180910 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-82c4b" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.181133 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.196567 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwl9x\" (UniqueName: \"kubernetes.io/projected/67ad4684-e977-4f3e-b1f0-2efa3d60567f-kube-api-access-kwl9x\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.196713 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-scripts\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.196760 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ad4684-e977-4f3e-b1f0-2efa3d60567f-logs\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.196822 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/67ad4684-e977-4f3e-b1f0-2efa3d60567f-horizon-secret-key\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.196975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-config-data\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.216202 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/71a092ad-773d-47b4-bc1f-73358adecf4a-kube-api-access-bvqkf\") pod \"cinder-db-sync-hctjl\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.230779 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d86d75cbf-rg448"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.299106 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.301664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwl9x\" (UniqueName: \"kubernetes.io/projected/67ad4684-e977-4f3e-b1f0-2efa3d60567f-kube-api-access-kwl9x\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.302054 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-scripts\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.302271 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ad4684-e977-4f3e-b1f0-2efa3d60567f-logs\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.302464 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/67ad4684-e977-4f3e-b1f0-2efa3d60567f-horizon-secret-key\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.302760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-config-data\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: 
I1210 14:57:31.309974 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ad4684-e977-4f3e-b1f0-2efa3d60567f-logs\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.311741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-scripts\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.311869 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-config-data\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.317956 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/67ad4684-e977-4f3e-b1f0-2efa3d60567f-horizon-secret-key\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.336537 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwl9x\" (UniqueName: \"kubernetes.io/projected/67ad4684-e977-4f3e-b1f0-2efa3d60567f-kube-api-access-kwl9x\") pod \"horizon-5d86d75cbf-rg448\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.374571 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-vxqcx"] Dec 10 14:57:31 crc 
kubenswrapper[4718]: I1210 14:57:31.397971 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qklhp"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.403119 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.407661 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.407695 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.408689 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5k8qj" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.432549 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.452036 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qklhp"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.457841 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.477540 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578598f949-4lgxw"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.515784 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.522449 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hctjl" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.532035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6329eaf-fcae-417e-96a8-96719f02420b-logs\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.532136 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-combined-ca-bundle\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.532179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-config-data\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.532301 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5df2\" (UniqueName: \"kubernetes.io/projected/f6329eaf-fcae-417e-96a8-96719f02420b-kube-api-access-c5df2\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.532320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-scripts\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " 
pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.535679 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d98bb654c-nmwh8"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.538337 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.543814 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.566569 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578598f949-4lgxw"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.611123 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d98bb654c-nmwh8"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634249 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-svc\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634316 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-sb\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634405 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4gg\" (UniqueName: \"kubernetes.io/projected/830caa3e-735c-4fda-9323-ad5cf8d4779a-kube-api-access-4b4gg\") pod 
\"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634505 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634582 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-nb\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634662 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-config\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5df2\" (UniqueName: \"kubernetes.io/projected/f6329eaf-fcae-417e-96a8-96719f02420b-kube-api-access-c5df2\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634753 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-scripts\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-swift-storage-0\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634893 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6329eaf-fcae-417e-96a8-96719f02420b-logs\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634944 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-combined-ca-bundle\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.634998 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-config-data\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.638202 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.642118 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6329eaf-fcae-417e-96a8-96719f02420b-logs\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.644353 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.648325 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-scripts\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.648629 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.653896 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-combined-ca-bundle\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.667585 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5df2\" (UniqueName: \"kubernetes.io/projected/f6329eaf-fcae-417e-96a8-96719f02420b-kube-api-access-c5df2\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.670226 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] 
Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.674501 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-config-data\") pod \"placement-db-sync-qklhp\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745012 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745073 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-run-httpd\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745104 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-config-data\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745161 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-horizon-secret-key\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745269 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-svc\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-sb\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745316 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-logs\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745354 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4gg\" (UniqueName: \"kubernetes.io/projected/830caa3e-735c-4fda-9323-ad5cf8d4779a-kube-api-access-4b4gg\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745419 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-nb\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745587 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqdvd\" (UniqueName: \"kubernetes.io/projected/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-kube-api-access-nqdvd\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-scripts\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745643 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-log-httpd\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745670 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-config\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745707 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-scripts\") 
pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745772 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-config-data\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745824 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-swift-storage-0\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.745847 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcqr\" (UniqueName: \"kubernetes.io/projected/18de3c3d-d30b-4c09-b1a2-3a6376de8843-kube-api-access-xmcqr\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.747673 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-nb\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.748295 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-config\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " 
pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.749879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-svc\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.750004 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g9vq" event={"ID":"7c827924-284b-42bd-b871-a892565d7a73","Type":"ContainerStarted","Data":"55a433903eb90cfacd4bf518c38e6575d778e5e27218002dc784cda9d4a400dc"} Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.746722 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-sb\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.751639 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-swift-storage-0\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.769764 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2g9vq"] Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.773205 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4gg\" (UniqueName: \"kubernetes.io/projected/830caa3e-735c-4fda-9323-ad5cf8d4779a-kube-api-access-4b4gg\") pod \"dnsmasq-dns-578598f949-4lgxw\" (UID: 
\"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848206 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqdvd\" (UniqueName: \"kubernetes.io/projected/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-kube-api-access-nqdvd\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848236 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-scripts\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848265 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-log-httpd\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848298 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-scripts\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848341 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-config-data\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848372 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmcqr\" (UniqueName: \"kubernetes.io/projected/18de3c3d-d30b-4c09-b1a2-3a6376de8843-kube-api-access-xmcqr\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848475 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-run-httpd\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-config-data\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-horizon-secret-key\") pod 
\"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.848585 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-logs\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.849553 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-logs\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.852663 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-scripts\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.853297 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-log-httpd\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.855745 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-config-data\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.856352 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-run-httpd\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.858564 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-scripts\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.859228 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.872876 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-config-data\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.875938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.887534 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-horizon-secret-key\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" 
Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.889658 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qklhp" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.898637 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqdvd\" (UniqueName: \"kubernetes.io/projected/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-kube-api-access-nqdvd\") pod \"horizon-7d98bb654c-nmwh8\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.901469 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmcqr\" (UniqueName: \"kubernetes.io/projected/18de3c3d-d30b-4c09-b1a2-3a6376de8843-kube-api-access-xmcqr\") pod \"ceilometer-0\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " pod="openstack/ceilometer-0" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.911402 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:31 crc kubenswrapper[4718]: I1210 14:57:31.934872 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.010248 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.019453 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-vxqcx"] Dec 10 14:57:32 crc kubenswrapper[4718]: W1210 14:57:32.119771 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6829a4_a5f2_43ac_b574_c4d9edaf0f85.slice/crio-014b3c878c29be2b997e342c5fabccaeb396b3cc61a0cc9ff8deba701f3029c2 WatchSource:0}: Error finding container 014b3c878c29be2b997e342c5fabccaeb396b3cc61a0cc9ff8deba701f3029c2: Status 404 returned error can't find the container with id 014b3c878c29be2b997e342c5fabccaeb396b3cc61a0cc9ff8deba701f3029c2 Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.244296 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-cwrnt"] Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.420509 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-hctjl"] Dec 10 14:57:32 crc kubenswrapper[4718]: W1210 14:57:32.443706 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71a092ad_773d_47b4_bc1f_73358adecf4a.slice/crio-15c5569295772b94dae2b5f9138d2ea881de6ca4a9b203fd4169a88ed3fc0bf0 WatchSource:0}: Error finding container 15c5569295772b94dae2b5f9138d2ea881de6ca4a9b203fd4169a88ed3fc0bf0: Status 404 returned error can't find the container with id 15c5569295772b94dae2b5f9138d2ea881de6ca4a9b203fd4169a88ed3fc0bf0 Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.464457 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zr6j2"] Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.572414 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5d86d75cbf-rg448"] Dec 10 14:57:32 crc kubenswrapper[4718]: W1210 14:57:32.655132 4718 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67ad4684_e977_4f3e_b1f0_2efa3d60567f.slice/crio-85b42b90aa84843029a8e09471ed602576bed4a51ec2ae0072e6e6f0479dc6fc WatchSource:0}: Error finding container 85b42b90aa84843029a8e09471ed602576bed4a51ec2ae0072e6e6f0479dc6fc: Status 404 returned error can't find the container with id 85b42b90aa84843029a8e09471ed602576bed4a51ec2ae0072e6e6f0479dc6fc Dec 10 14:57:32 crc kubenswrapper[4718]: W1210 14:57:32.752925 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb18be8d_fe62_4361_8dd1_1a068f0cc2d4.slice/crio-fcb4739bab23d1c8685f1223a94acc6dcd3d7ae74ca747432cc789f03b1c8f3f WatchSource:0}: Error finding container fcb4739bab23d1c8685f1223a94acc6dcd3d7ae74ca747432cc789f03b1c8f3f: Status 404 returned error can't find the container with id fcb4739bab23d1c8685f1223a94acc6dcd3d7ae74ca747432cc789f03b1c8f3f Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.756142 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578598f949-4lgxw"] Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.778385 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-4lgxw" event={"ID":"830caa3e-735c-4fda-9323-ad5cf8d4779a","Type":"ContainerStarted","Data":"c35eb49bc0f96de8946bf83646b4fe04dd20714cb8b49b8887327be89d3b959c"} Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.782635 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d98bb654c-nmwh8"] Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.789861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d86d75cbf-rg448" event={"ID":"67ad4684-e977-4f3e-b1f0-2efa3d60567f","Type":"ContainerStarted","Data":"85b42b90aa84843029a8e09471ed602576bed4a51ec2ae0072e6e6f0479dc6fc"} Dec 10 14:57:32 crc 
kubenswrapper[4718]: I1210 14:57:32.799357 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g9vq" event={"ID":"7c827924-284b-42bd-b871-a892565d7a73","Type":"ContainerStarted","Data":"a14f45af416b3a29626303cc96f6de332ad3d25851121806e75e551a172f9346"} Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.837332 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qklhp"] Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.842742 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwrnt" event={"ID":"3990dc15-53e8-4cd7-a25d-f9b322b74f3e","Type":"ContainerStarted","Data":"8030f2957773807418461c2a2f1fa5b1ec32459c4bed524388222c09b87f0e0a"} Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.859643 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2g9vq" podStartSLOduration=2.859572042 podStartE2EDuration="2.859572042s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:32.827180255 +0000 UTC m=+1557.776403672" watchObservedRunningTime="2025-12-10 14:57:32.859572042 +0000 UTC m=+1557.808795459" Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.870925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zr6j2" event={"ID":"db4ee945-67d7-4670-9192-2ecaf4f03c3d","Type":"ContainerStarted","Data":"60fede4de97b5270c89c4bd04a080af782c13ea31acedbb13bbef7c09c0d048a"} Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.878997 4718 generic.go:334] "Generic (PLEG): container finished" podID="6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" containerID="c82e39dae39789df7d1443aac027955e54cffb1e22b81a3e8e7ea9b1e8d53bff" exitCode=0 Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.879209 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" event={"ID":"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85","Type":"ContainerDied","Data":"c82e39dae39789df7d1443aac027955e54cffb1e22b81a3e8e7ea9b1e8d53bff"} Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.879456 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" event={"ID":"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85","Type":"ContainerStarted","Data":"014b3c878c29be2b997e342c5fabccaeb396b3cc61a0cc9ff8deba701f3029c2"} Dec 10 14:57:32 crc kubenswrapper[4718]: I1210 14:57:32.892739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hctjl" event={"ID":"71a092ad-773d-47b4-bc1f-73358adecf4a","Type":"ContainerStarted","Data":"15c5569295772b94dae2b5f9138d2ea881de6ca4a9b203fd4169a88ed3fc0bf0"} Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.053968 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.723729 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.802444 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-svc\") pod \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.802633 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-config\") pod \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.802698 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9xmh\" (UniqueName: \"kubernetes.io/projected/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-kube-api-access-n9xmh\") pod \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.802903 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-nb\") pod \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.803002 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-sb\") pod \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.803045 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-swift-storage-0\") pod \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\" (UID: \"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85\") " Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.818684 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-kube-api-access-n9xmh" (OuterVolumeSpecName: "kube-api-access-n9xmh") pod "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" (UID: "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"). InnerVolumeSpecName "kube-api-access-n9xmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.847907 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" (UID: "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.860483 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" (UID: "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.868707 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" (UID: "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.870493 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" (UID: "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.875313 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-config" (OuterVolumeSpecName: "config") pod "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" (UID: "6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.905726 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.905776 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9xmh\" (UniqueName: \"kubernetes.io/projected/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-kube-api-access-n9xmh\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.905791 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.905802 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 
14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.905821 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.905832 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.964419 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d98bb654c-nmwh8" event={"ID":"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4","Type":"ContainerStarted","Data":"fcb4739bab23d1c8685f1223a94acc6dcd3d7ae74ca747432cc789f03b1c8f3f"} Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.989696 4718 generic.go:334] "Generic (PLEG): container finished" podID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerID="50dccb61ea20f2f45d72195938f69e3a3d8444e08fd9804061d2efd12042234e" exitCode=0 Dec 10 14:57:33 crc kubenswrapper[4718]: I1210 14:57:33.991364 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-4lgxw" event={"ID":"830caa3e-735c-4fda-9323-ad5cf8d4779a","Type":"ContainerDied","Data":"50dccb61ea20f2f45d72195938f69e3a3d8444e08fd9804061d2efd12042234e"} Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.032840 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qklhp" event={"ID":"f6329eaf-fcae-417e-96a8-96719f02420b","Type":"ContainerStarted","Data":"3f2bd73641834fccf96673b525029ebe486986660261430f3467d2e66e69f0d5"} Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.076321 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zr6j2" 
event={"ID":"db4ee945-67d7-4670-9192-2ecaf4f03c3d","Type":"ContainerStarted","Data":"1ac122f7920ac5e36138ce8ff9e8333fbba94203b46380f16113189b53ac545e"} Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.088950 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" event={"ID":"6c6829a4-a5f2-43ac-b574-c4d9edaf0f85","Type":"ContainerDied","Data":"014b3c878c29be2b997e342c5fabccaeb396b3cc61a0cc9ff8deba701f3029c2"} Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.089029 4718 scope.go:117] "RemoveContainer" containerID="c82e39dae39789df7d1443aac027955e54cffb1e22b81a3e8e7ea9b1e8d53bff" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.089181 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-vxqcx" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.095005 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zr6j2" podStartSLOduration=4.094961301 podStartE2EDuration="4.094961301s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:34.089658845 +0000 UTC m=+1559.038882252" watchObservedRunningTime="2025-12-10 14:57:34.094961301 +0000 UTC m=+1559.044184718" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.115816 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18de3c3d-d30b-4c09-b1a2-3a6376de8843","Type":"ContainerStarted","Data":"6c0bd3e4a77a149951b0ce1032fdd782f58f3c67d6a2eedd0de928d6e88e5d7b"} Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.467147 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d86d75cbf-rg448"] Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.581847 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-58bbf48b7f-vxqcx"] Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.598706 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5547d7fc49-599jc"] Dec 10 14:57:34 crc kubenswrapper[4718]: E1210 14:57:34.599482 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" containerName="init" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.599501 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" containerName="init" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.599766 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" containerName="init" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.601635 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.617675 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-vxqcx"] Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.656116 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5547d7fc49-599jc"] Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.677716 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-scripts\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.678167 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f14e36c-813e-413c-a7e2-140c02eb6599-horizon-secret-key\") pod \"horizon-5547d7fc49-599jc\" (UID: 
\"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.678365 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14e36c-813e-413c-a7e2-140c02eb6599-logs\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.678532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-config-data\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.678708 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm44f\" (UniqueName: \"kubernetes.io/projected/1f14e36c-813e-413c-a7e2-140c02eb6599-kube-api-access-gm44f\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.733968 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.781463 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-config-data\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.781593 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm44f\" (UniqueName: 
\"kubernetes.io/projected/1f14e36c-813e-413c-a7e2-140c02eb6599-kube-api-access-gm44f\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.781718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-scripts\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.781898 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f14e36c-813e-413c-a7e2-140c02eb6599-horizon-secret-key\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.781956 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14e36c-813e-413c-a7e2-140c02eb6599-logs\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.782592 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14e36c-813e-413c-a7e2-140c02eb6599-logs\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.783372 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-scripts\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " 
pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.784815 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-config-data\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.795489 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f14e36c-813e-413c-a7e2-140c02eb6599-horizon-secret-key\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:34 crc kubenswrapper[4718]: I1210 14:57:34.810124 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm44f\" (UniqueName: \"kubernetes.io/projected/1f14e36c-813e-413c-a7e2-140c02eb6599-kube-api-access-gm44f\") pod \"horizon-5547d7fc49-599jc\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:35 crc kubenswrapper[4718]: I1210 14:57:35.009992 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:57:35 crc kubenswrapper[4718]: I1210 14:57:35.020673 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:57:35 crc kubenswrapper[4718]: E1210 14:57:35.021304 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:57:35 crc kubenswrapper[4718]: I1210 14:57:35.291404 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-4lgxw" event={"ID":"830caa3e-735c-4fda-9323-ad5cf8d4779a","Type":"ContainerStarted","Data":"9cdcf6bfb5a9d58472ed8acc4d9913a45c1ce0eeb3f6857524686c93222b8e7b"} Dec 10 14:57:35 crc kubenswrapper[4718]: I1210 14:57:35.291882 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:35 crc kubenswrapper[4718]: I1210 14:57:35.340530 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578598f949-4lgxw" podStartSLOduration=4.340502328 podStartE2EDuration="4.340502328s" podCreationTimestamp="2025-12-10 14:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:57:35.329352314 +0000 UTC m=+1560.278575731" watchObservedRunningTime="2025-12-10 14:57:35.340502328 +0000 UTC m=+1560.289725745" Dec 10 14:57:35 crc kubenswrapper[4718]: I1210 14:57:35.789710 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5547d7fc49-599jc"] Dec 10 14:57:36 crc 
kubenswrapper[4718]: I1210 14:57:36.056626 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c6829a4-a5f2-43ac-b574-c4d9edaf0f85" path="/var/lib/kubelet/pods/6c6829a4-a5f2-43ac-b574-c4d9edaf0f85/volumes" Dec 10 14:57:36 crc kubenswrapper[4718]: I1210 14:57:36.369371 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5547d7fc49-599jc" event={"ID":"1f14e36c-813e-413c-a7e2-140c02eb6599","Type":"ContainerStarted","Data":"e7a5bd8866e7654e22182f384c47a8c29193b45f06bcee9a16d3f6561ae28b26"} Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.897506 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d98bb654c-nmwh8"] Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.940732 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6bb7f498bd-pjx6h"] Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.943255 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.946713 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.955303 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bb7f498bd-pjx6h"] Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.965747 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-tls-certs\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.971122 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-secret-key\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.971495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-combined-ca-bundle\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.971664 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-scripts\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.972056 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bc5c19-c945-4bca-adef-0ddf1b9fabac-logs\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.972197 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-config-data\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:40 crc kubenswrapper[4718]: I1210 14:57:40.981428 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngd4x\" (UniqueName: 
\"kubernetes.io/projected/57bc5c19-c945-4bca-adef-0ddf1b9fabac-kube-api-access-ngd4x\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.053822 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5547d7fc49-599jc"] Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.089021 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77c9ddb894-brvxz"] Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.092998 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095053 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-secret-key\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095126 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-combined-ca-bundle\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095174 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-scripts\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095289 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bc5c19-c945-4bca-adef-0ddf1b9fabac-logs\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095326 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-config-data\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095482 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngd4x\" (UniqueName: \"kubernetes.io/projected/57bc5c19-c945-4bca-adef-0ddf1b9fabac-kube-api-access-ngd4x\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.095564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-tls-certs\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.096378 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bc5c19-c945-4bca-adef-0ddf1b9fabac-logs\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.097624 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-scripts\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: 
\"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.097718 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-config-data\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.105093 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-combined-ca-bundle\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.105126 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-secret-key\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.120678 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77c9ddb894-brvxz"] Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.125477 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngd4x\" (UniqueName: \"kubernetes.io/projected/57bc5c19-c945-4bca-adef-0ddf1b9fabac-kube-api-access-ngd4x\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.141083 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-tls-certs\") pod \"horizon-6bb7f498bd-pjx6h\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.197336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1a09589-44b9-49f4-8970-d3381c3d4b99-config-data\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.197843 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-horizon-tls-certs\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.197950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a09589-44b9-49f4-8970-d3381c3d4b99-logs\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.198103 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1a09589-44b9-49f4-8970-d3381c3d4b99-scripts\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.198278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-horizon-secret-key\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.198469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-combined-ca-bundle\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.198632 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4t9\" (UniqueName: \"kubernetes.io/projected/e1a09589-44b9-49f4-8970-d3381c3d4b99-kube-api-access-kn4t9\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.301985 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-combined-ca-bundle\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.302091 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4t9\" (UniqueName: \"kubernetes.io/projected/e1a09589-44b9-49f4-8970-d3381c3d4b99-kube-api-access-kn4t9\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.302134 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e1a09589-44b9-49f4-8970-d3381c3d4b99-config-data\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.302189 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-horizon-tls-certs\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.302211 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a09589-44b9-49f4-8970-d3381c3d4b99-logs\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.302301 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1a09589-44b9-49f4-8970-d3381c3d4b99-scripts\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.302330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-horizon-secret-key\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.306286 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1a09589-44b9-49f4-8970-d3381c3d4b99-config-data\") pod \"horizon-77c9ddb894-brvxz\" (UID: 
\"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.306605 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.306792 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a09589-44b9-49f4-8970-d3381c3d4b99-logs\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.308884 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-horizon-secret-key\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.319642 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-horizon-tls-certs\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.319783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1a09589-44b9-49f4-8970-d3381c3d4b99-scripts\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.330600 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a09589-44b9-49f4-8970-d3381c3d4b99-combined-ca-bundle\") pod 
\"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.330942 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4t9\" (UniqueName: \"kubernetes.io/projected/e1a09589-44b9-49f4-8970-d3381c3d4b99-kube-api-access-kn4t9\") pod \"horizon-77c9ddb894-brvxz\" (UID: \"e1a09589-44b9-49f4-8970-d3381c3d4b99\") " pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:41 crc kubenswrapper[4718]: I1210 14:57:41.526736 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:57:42 crc kubenswrapper[4718]: I1210 14:57:42.106006 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:57:42 crc kubenswrapper[4718]: I1210 14:57:42.183155 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-dwmpv"] Dec 10 14:57:42 crc kubenswrapper[4718]: I1210 14:57:42.184748 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" containerID="cri-o://0a4f7cf73dde3288d097d40166898edbde2e5a5fe7c4c7537d380c07cf6edfc5" gracePeriod=10 Dec 10 14:57:43 crc kubenswrapper[4718]: I1210 14:57:43.786985 4718 generic.go:334] "Generic (PLEG): container finished" podID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerID="0a4f7cf73dde3288d097d40166898edbde2e5a5fe7c4c7537d380c07cf6edfc5" exitCode=0 Dec 10 14:57:43 crc kubenswrapper[4718]: I1210 14:57:43.787550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" event={"ID":"b15123b2-81bb-4fab-b6cd-ed84c0965118","Type":"ContainerDied","Data":"0a4f7cf73dde3288d097d40166898edbde2e5a5fe7c4c7537d380c07cf6edfc5"} Dec 10 14:57:44 crc 
kubenswrapper[4718]: I1210 14:57:44.499482 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77c9ddb894-brvxz"] Dec 10 14:57:44 crc kubenswrapper[4718]: W1210 14:57:44.506603 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a09589_44b9_49f4_8970_d3381c3d4b99.slice/crio-64a0463eb929cfe0ce4b3258026aaa5f2e18188de9a2edfcc3ce4f0a39757945 WatchSource:0}: Error finding container 64a0463eb929cfe0ce4b3258026aaa5f2e18188de9a2edfcc3ce4f0a39757945: Status 404 returned error can't find the container with id 64a0463eb929cfe0ce4b3258026aaa5f2e18188de9a2edfcc3ce4f0a39757945 Dec 10 14:57:44 crc kubenswrapper[4718]: I1210 14:57:44.783761 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6bb7f498bd-pjx6h"] Dec 10 14:57:44 crc kubenswrapper[4718]: I1210 14:57:44.805947 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c9ddb894-brvxz" event={"ID":"e1a09589-44b9-49f4-8970-d3381c3d4b99","Type":"ContainerStarted","Data":"64a0463eb929cfe0ce4b3258026aaa5f2e18188de9a2edfcc3ce4f0a39757945"} Dec 10 14:57:46 crc kubenswrapper[4718]: I1210 14:57:46.326622 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:57:46 crc kubenswrapper[4718]: E1210 14:57:46.327878 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:57:50 crc kubenswrapper[4718]: I1210 14:57:50.799888 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" 
podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Dec 10 14:57:55 crc kubenswrapper[4718]: I1210 14:57:55.801843 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.322231 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.397752 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-nb\") pod \"b15123b2-81bb-4fab-b6cd-ed84c0965118\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.397952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9grpf\" (UniqueName: \"kubernetes.io/projected/b15123b2-81bb-4fab-b6cd-ed84c0965118-kube-api-access-9grpf\") pod \"b15123b2-81bb-4fab-b6cd-ed84c0965118\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.398097 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-sb\") pod \"b15123b2-81bb-4fab-b6cd-ed84c0965118\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.398242 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-swift-storage-0\") pod 
\"b15123b2-81bb-4fab-b6cd-ed84c0965118\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.398336 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-svc\") pod \"b15123b2-81bb-4fab-b6cd-ed84c0965118\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.398463 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-config\") pod \"b15123b2-81bb-4fab-b6cd-ed84c0965118\" (UID: \"b15123b2-81bb-4fab-b6cd-ed84c0965118\") " Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.407804 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15123b2-81bb-4fab-b6cd-ed84c0965118-kube-api-access-9grpf" (OuterVolumeSpecName: "kube-api-access-9grpf") pod "b15123b2-81bb-4fab-b6cd-ed84c0965118" (UID: "b15123b2-81bb-4fab-b6cd-ed84c0965118"). InnerVolumeSpecName "kube-api-access-9grpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.409249 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.409340 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.409546 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc7h5dch59bh654h88h68chf5h569h658h568h5b8h555h67h674h5d5h658h547h7ch56ch68h5b4h54dh5h564h9ch5bbh67ch54h656h647h5cbh8dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwl9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessP
robe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5d86d75cbf-rg448_openstack(67ad4684-e977-4f3e-b1f0-2efa3d60567f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.413175 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-5d86d75cbf-rg448" podUID="67ad4684-e977-4f3e-b1f0-2efa3d60567f" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.459412 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b15123b2-81bb-4fab-b6cd-ed84c0965118" (UID: "b15123b2-81bb-4fab-b6cd-ed84c0965118"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.468831 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b15123b2-81bb-4fab-b6cd-ed84c0965118" (UID: "b15123b2-81bb-4fab-b6cd-ed84c0965118"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.470714 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b15123b2-81bb-4fab-b6cd-ed84c0965118" (UID: "b15123b2-81bb-4fab-b6cd-ed84c0965118"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.479709 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.479835 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.480038 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66ch696h668h67ch7ch99h674h657h59dh5dfh59bh58chf5h56fh5f6hcbh8ch589h67dh595h98hdfh587h565h566h75h644h5fbh689h95h76h64cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqdvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d98bb654c-nmwh8_openstack(fb18be8d-fe62-4361-8dd1-1a068f0cc2d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 
14:57:56.482938 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-7d98bb654c-nmwh8" podUID="fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.487719 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b15123b2-81bb-4fab-b6cd-ed84c0965118" (UID: "b15123b2-81bb-4fab-b6cd-ed84c0965118"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.500739 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-config" (OuterVolumeSpecName: "config") pod "b15123b2-81bb-4fab-b6cd-ed84c0965118" (UID: "b15123b2-81bb-4fab-b6cd-ed84c0965118"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.501220 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.501274 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.501292 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.501305 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.501321 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9grpf\" (UniqueName: \"kubernetes.io/projected/b15123b2-81bb-4fab-b6cd-ed84c0965118-kube-api-access-9grpf\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.501338 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b15123b2-81bb-4fab-b6cd-ed84c0965118-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.563369 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.563474 4718 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.563669 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68h5bchc5h676h5bbh54ch56dhb6h58dh58ch7bh56bh6ch687h7h55dhdch64dh66chb6h689h67bh55ch5dch66dh646h5fbh64dh5cbh5dfh5bfhf5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gm44f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,
AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5547d7fc49-599jc_openstack(1f14e36c-813e-413c-a7e2-140c02eb6599): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:56 crc kubenswrapper[4718]: E1210 14:57:56.565931 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-5547d7fc49-599jc" podUID="1f14e36c-813e-413c-a7e2-140c02eb6599" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.959444 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerStarted","Data":"afc10462319479238449d83b0057b11e3073c64e4d1a7efb30f713d556f9a2ae"} Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.964223 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.964212 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" event={"ID":"b15123b2-81bb-4fab-b6cd-ed84c0965118","Type":"ContainerDied","Data":"a84419b5da2456d648ff0685db1aef0497fee23b07446758f3c6606718d62535"} Dec 10 14:57:56 crc kubenswrapper[4718]: I1210 14:57:56.964334 4718 scope.go:117] "RemoveContainer" containerID="0a4f7cf73dde3288d097d40166898edbde2e5a5fe7c4c7537d380c07cf6edfc5" Dec 10 14:57:57 crc kubenswrapper[4718]: I1210 14:57:57.091769 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-dwmpv"] Dec 10 14:57:57 crc kubenswrapper[4718]: I1210 14:57:57.101685 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-dwmpv"] Dec 10 14:57:58 crc kubenswrapper[4718]: I1210 14:57:58.046361 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" path="/var/lib/kubelet/pods/b15123b2-81bb-4fab-b6cd-ed84c0965118/volumes" Dec 10 14:57:58 crc kubenswrapper[4718]: E1210 14:57:58.293799 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-placement-api:current" Dec 10 14:57:58 crc kubenswrapper[4718]: E1210 14:57:58.294449 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-placement-api:current" Dec 10 14:57:58 crc kubenswrapper[4718]: E1210 14:57:58.294740 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-placement-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5df2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-qklhp_openstack(f6329eaf-fcae-417e-96a8-96719f02420b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:57:58 crc kubenswrapper[4718]: E1210 14:57:58.295982 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qklhp" podUID="f6329eaf-fcae-417e-96a8-96719f02420b" Dec 10 14:57:58 crc kubenswrapper[4718]: I1210 14:57:58.992015 4718 generic.go:334] "Generic (PLEG): container finished" podID="7c827924-284b-42bd-b871-a892565d7a73" containerID="a14f45af416b3a29626303cc96f6de332ad3d25851121806e75e551a172f9346" exitCode=0 Dec 10 14:57:58 crc kubenswrapper[4718]: I1210 14:57:58.992123 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g9vq" event={"ID":"7c827924-284b-42bd-b871-a892565d7a73","Type":"ContainerDied","Data":"a14f45af416b3a29626303cc96f6de332ad3d25851121806e75e551a172f9346"} Dec 10 14:57:58 crc kubenswrapper[4718]: I1210 14:57:58.995579 4718 generic.go:334] "Generic (PLEG): container finished" podID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" containerID="afb065c6f5d1d7dc86c06200e773d7019392ee936131ec9ded0efe4e02f88d46" exitCode=0 Dec 10 14:57:58 crc kubenswrapper[4718]: I1210 14:57:58.996739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bcnrt" event={"ID":"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109","Type":"ContainerDied","Data":"afb065c6f5d1d7dc86c06200e773d7019392ee936131ec9ded0efe4e02f88d46"} Dec 10 14:57:58 crc kubenswrapper[4718]: E1210 14:57:58.998851 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-placement-api:current\\\"\"" 
pod="openstack/placement-db-sync-qklhp" podUID="f6329eaf-fcae-417e-96a8-96719f02420b" Dec 10 14:58:00 crc kubenswrapper[4718]: I1210 14:58:00.022796 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:58:00 crc kubenswrapper[4718]: E1210 14:58:00.023475 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:58:00 crc kubenswrapper[4718]: I1210 14:58:00.803832 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-dwmpv" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Dec 10 14:58:02 crc kubenswrapper[4718]: E1210 14:58:02.733492 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current" Dec 10 14:58:02 crc kubenswrapper[4718]: E1210 14:58:02.733923 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current" Dec 10 14:58:02 crc kubenswrapper[4718]: E1210 14:58:02.734131 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668hd8hb6h576h6h549h8h554h54dh5c4h6chbfh55chdch58ch5cdh657h5d6h87h65bh85h56fh5cch577h8dhffh7fh5bfh5bch7dh55ch75q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmcqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(18de3c3d-d30b-4c09-b1a2-3a6376de8843): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:58:04 crc kubenswrapper[4718]: I1210 14:58:04.472993 4718 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6c6829a4-a5f2-43ac-b574-c4d9edaf0f85"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6c6829a4-a5f2-43ac-b574-c4d9edaf0f85] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6c6829a4_a5f2_43ac_b574_c4d9edaf0f85.slice" Dec 10 14:58:14 crc kubenswrapper[4718]: I1210 14:58:14.021359 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:58:14 crc kubenswrapper[4718]: E1210 14:58:14.022678 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.521084 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.521818 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.522021 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64ch5f6h5c7h5c8h58ch5fdh5ch698hddh9fh546h589h5d5h66h6bh7fh5c9h597h5ddh664h6h54bhb4hdh67dhd7h58ch94h68hcbh5cfhc5q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kn4t9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77c9ddb894-brvxz_openstack(e1a09589-44b9-49f4-8970-d3381c3d4b99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.524295 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-77c9ddb894-brvxz" podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.544548 4718 scope.go:117] "RemoveContainer" containerID="ffbdf3a0ecd58eb5930b6dfac8243f7a0af458182edee3af0698e445e118016a" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.552448 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.552539 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.552715 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh55dh57dhc9h645h5b9h56dh547h5ch665h5cdh5f9h55ch5d9h68h8ch568h557h59dh564h55ch659h65dh88h5c4h648h68fh94h55fh8dh59dh667q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngd4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{
},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6bb7f498bd-pjx6h_openstack(57bc5c19-c945-4bca-adef-0ddf1b9fabac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:58:16 crc kubenswrapper[4718]: E1210 14:58:16.559899 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.664720 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.679189 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.687885 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.710807 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.727777 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-combined-ca-bundle\") pod \"7c827924-284b-42bd-b871-a892565d7a73\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836221 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqdvd\" (UniqueName: \"kubernetes.io/projected/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-kube-api-access-nqdvd\") pod \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836264 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-config-data\") pod \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836296 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm44f\" (UniqueName: \"kubernetes.io/projected/1f14e36c-813e-413c-a7e2-140c02eb6599-kube-api-access-gm44f\") pod \"1f14e36c-813e-413c-a7e2-140c02eb6599\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836323 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/67ad4684-e977-4f3e-b1f0-2efa3d60567f-horizon-secret-key\") pod \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836350 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-mhdl4\" (UniqueName: \"kubernetes.io/projected/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-kube-api-access-mhdl4\") pod \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836423 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ad4684-e977-4f3e-b1f0-2efa3d60567f-logs\") pod \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836450 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f14e36c-813e-413c-a7e2-140c02eb6599-horizon-secret-key\") pod \"1f14e36c-813e-413c-a7e2-140c02eb6599\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836487 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-combined-ca-bundle\") pod \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836514 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-scripts\") pod \"7c827924-284b-42bd-b871-a892565d7a73\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.836533 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-config-data\") pod \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " Dec 10 14:58:16 crc 
kubenswrapper[4718]: I1210 14:58:16.836913 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-scripts\") pod \"1f14e36c-813e-413c-a7e2-140c02eb6599\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-config-data\") pod \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837363 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-horizon-secret-key\") pod \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14e36c-813e-413c-a7e2-140c02eb6599-logs\") pod \"1f14e36c-813e-413c-a7e2-140c02eb6599\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837625 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-db-sync-config-data\") pod \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\" (UID: \"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837706 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-config-data\") pod 
\"7c827924-284b-42bd-b871-a892565d7a73\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837736 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ad4684-e977-4f3e-b1f0-2efa3d60567f-logs" (OuterVolumeSpecName: "logs") pod "67ad4684-e977-4f3e-b1f0-2efa3d60567f" (UID: "67ad4684-e977-4f3e-b1f0-2efa3d60567f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.837818 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-logs\") pod \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838165 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-config-data\") pod \"1f14e36c-813e-413c-a7e2-140c02eb6599\" (UID: \"1f14e36c-813e-413c-a7e2-140c02eb6599\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838208 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-scripts\") pod \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838239 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-credential-keys\") pod \"7c827924-284b-42bd-b871-a892565d7a73\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838269 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-scripts\") pod \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\" (UID: \"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838338 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-945mb\" (UniqueName: \"kubernetes.io/projected/7c827924-284b-42bd-b871-a892565d7a73-kube-api-access-945mb\") pod \"7c827924-284b-42bd-b871-a892565d7a73\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838427 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-fernet-keys\") pod \"7c827924-284b-42bd-b871-a892565d7a73\" (UID: \"7c827924-284b-42bd-b871-a892565d7a73\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838460 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwl9x\" (UniqueName: \"kubernetes.io/projected/67ad4684-e977-4f3e-b1f0-2efa3d60567f-kube-api-access-kwl9x\") pod \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\" (UID: \"67ad4684-e977-4f3e-b1f0-2efa3d60567f\") " Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.838674 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-config-data" (OuterVolumeSpecName: "config-data") pod "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" (UID: "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.839204 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-scripts" (OuterVolumeSpecName: "scripts") pod "1f14e36c-813e-413c-a7e2-140c02eb6599" (UID: "1f14e36c-813e-413c-a7e2-140c02eb6599"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.840049 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-config-data" (OuterVolumeSpecName: "config-data") pod "67ad4684-e977-4f3e-b1f0-2efa3d60567f" (UID: "67ad4684-e977-4f3e-b1f0-2efa3d60567f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.840470 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67ad4684-e977-4f3e-b1f0-2efa3d60567f-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.840587 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.840687 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.840775 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.839281 
4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-logs" (OuterVolumeSpecName: "logs") pod "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" (UID: "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.847331 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f14e36c-813e-413c-a7e2-140c02eb6599-kube-api-access-gm44f" (OuterVolumeSpecName: "kube-api-access-gm44f") pod "1f14e36c-813e-413c-a7e2-140c02eb6599" (UID: "1f14e36c-813e-413c-a7e2-140c02eb6599"). InnerVolumeSpecName "kube-api-access-gm44f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.847604 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-scripts" (OuterVolumeSpecName: "scripts") pod "7c827924-284b-42bd-b871-a892565d7a73" (UID: "7c827924-284b-42bd-b871-a892565d7a73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.847617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ad4684-e977-4f3e-b1f0-2efa3d60567f-kube-api-access-kwl9x" (OuterVolumeSpecName: "kube-api-access-kwl9x") pod "67ad4684-e977-4f3e-b1f0-2efa3d60567f" (UID: "67ad4684-e977-4f3e-b1f0-2efa3d60567f"). InnerVolumeSpecName "kube-api-access-kwl9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.847680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f14e36c-813e-413c-a7e2-140c02eb6599-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1f14e36c-813e-413c-a7e2-140c02eb6599" (UID: "1f14e36c-813e-413c-a7e2-140c02eb6599"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.848144 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67ad4684-e977-4f3e-b1f0-2efa3d60567f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "67ad4684-e977-4f3e-b1f0-2efa3d60567f" (UID: "67ad4684-e977-4f3e-b1f0-2efa3d60567f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.848315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-kube-api-access-mhdl4" (OuterVolumeSpecName: "kube-api-access-mhdl4") pod "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" (UID: "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109"). InnerVolumeSpecName "kube-api-access-mhdl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.848629 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f14e36c-813e-413c-a7e2-140c02eb6599-logs" (OuterVolumeSpecName: "logs") pod "1f14e36c-813e-413c-a7e2-140c02eb6599" (UID: "1f14e36c-813e-413c-a7e2-140c02eb6599"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.848731 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-scripts" (OuterVolumeSpecName: "scripts") pod "67ad4684-e977-4f3e-b1f0-2efa3d60567f" (UID: "67ad4684-e977-4f3e-b1f0-2efa3d60567f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.849421 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-config-data" (OuterVolumeSpecName: "config-data") pod "1f14e36c-813e-413c-a7e2-140c02eb6599" (UID: "1f14e36c-813e-413c-a7e2-140c02eb6599"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.850010 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-scripts" (OuterVolumeSpecName: "scripts") pod "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" (UID: "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.850863 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" (UID: "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.855631 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7c827924-284b-42bd-b871-a892565d7a73" (UID: "7c827924-284b-42bd-b871-a892565d7a73"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.856823 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" (UID: "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.859456 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7c827924-284b-42bd-b871-a892565d7a73" (UID: "7c827924-284b-42bd-b871-a892565d7a73"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.860092 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-kube-api-access-nqdvd" (OuterVolumeSpecName: "kube-api-access-nqdvd") pod "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" (UID: "fb18be8d-fe62-4361-8dd1-1a068f0cc2d4"). InnerVolumeSpecName "kube-api-access-nqdvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.864151 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c827924-284b-42bd-b871-a892565d7a73-kube-api-access-945mb" (OuterVolumeSpecName: "kube-api-access-945mb") pod "7c827924-284b-42bd-b871-a892565d7a73" (UID: "7c827924-284b-42bd-b871-a892565d7a73"). InnerVolumeSpecName "kube-api-access-945mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.882617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c827924-284b-42bd-b871-a892565d7a73" (UID: "7c827924-284b-42bd-b871-a892565d7a73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.887998 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" (UID: "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.893327 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-config-data" (OuterVolumeSpecName: "config-data") pod "7c827924-284b-42bd-b871-a892565d7a73" (UID: "7c827924-284b-42bd-b871-a892565d7a73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.914590 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-config-data" (OuterVolumeSpecName: "config-data") pod "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" (UID: "bf1fd7f3-5d9a-44b4-8e4e-e71df148b109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943293 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-945mb\" (UniqueName: \"kubernetes.io/projected/7c827924-284b-42bd-b871-a892565d7a73-kube-api-access-945mb\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943362 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943378 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwl9x\" (UniqueName: \"kubernetes.io/projected/67ad4684-e977-4f3e-b1f0-2efa3d60567f-kube-api-access-kwl9x\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943411 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943428 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqdvd\" (UniqueName: \"kubernetes.io/projected/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-kube-api-access-nqdvd\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943443 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943458 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm44f\" (UniqueName: \"kubernetes.io/projected/1f14e36c-813e-413c-a7e2-140c02eb6599-kube-api-access-gm44f\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943469 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/67ad4684-e977-4f3e-b1f0-2efa3d60567f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943480 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhdl4\" (UniqueName: \"kubernetes.io/projected/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-kube-api-access-mhdl4\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943492 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1f14e36c-813e-413c-a7e2-140c02eb6599-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943534 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943550 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943564 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-horizon-secret-key\") on 
node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943579 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f14e36c-813e-413c-a7e2-140c02eb6599-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943595 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943608 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943618 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943632 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f14e36c-813e-413c-a7e2-140c02eb6599-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943648 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67ad4684-e977-4f3e-b1f0-2efa3d60567f-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943662 4718 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c827924-284b-42bd-b871-a892565d7a73-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:16 crc kubenswrapper[4718]: I1210 14:58:16.943676 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.204314 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5547d7fc49-599jc" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.204326 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5547d7fc49-599jc" event={"ID":"1f14e36c-813e-413c-a7e2-140c02eb6599","Type":"ContainerDied","Data":"e7a5bd8866e7654e22182f384c47a8c29193b45f06bcee9a16d3f6561ae28b26"} Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.207238 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5d86d75cbf-rg448" event={"ID":"67ad4684-e977-4f3e-b1f0-2efa3d60567f","Type":"ContainerDied","Data":"85b42b90aa84843029a8e09471ed602576bed4a51ec2ae0072e6e6f0479dc6fc"} Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.207275 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5d86d75cbf-rg448" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.211315 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2g9vq" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.211306 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2g9vq" event={"ID":"7c827924-284b-42bd-b871-a892565d7a73","Type":"ContainerDied","Data":"55a433903eb90cfacd4bf518c38e6575d778e5e27218002dc784cda9d4a400dc"} Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.211435 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a433903eb90cfacd4bf518c38e6575d778e5e27218002dc784cda9d4a400dc" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.220996 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-bcnrt" event={"ID":"bf1fd7f3-5d9a-44b4-8e4e-e71df148b109","Type":"ContainerDied","Data":"180f2e76ed4911d43a09683cb5e9b18af0e3967d895c8ccd60bcb7c0c33d1448"} Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.221067 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="180f2e76ed4911d43a09683cb5e9b18af0e3967d895c8ccd60bcb7c0c33d1448" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.221029 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-bcnrt" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.228955 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d98bb654c-nmwh8" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.229994 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d98bb654c-nmwh8" event={"ID":"fb18be8d-fe62-4361-8dd1-1a068f0cc2d4","Type":"ContainerDied","Data":"fcb4739bab23d1c8685f1223a94acc6dcd3d7ae74ca747432cc789f03b1c8f3f"} Dec 10 14:58:17 crc kubenswrapper[4718]: E1210 14:58:17.234225 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" Dec 10 14:58:17 crc kubenswrapper[4718]: E1210 14:58:17.238177 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-77c9ddb894-brvxz" podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.386599 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d98bb654c-nmwh8"] Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.386776 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d98bb654c-nmwh8"] Dec 10 14:58:17 crc kubenswrapper[4718]: E1210 14:58:17.396425 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current" Dec 10 14:58:17 crc kubenswrapper[4718]: E1210 14:58:17.396569 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current" Dec 10 14:58:17 crc kubenswrapper[4718]: E1210 14:58:17.396763 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ftgjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-cwrnt_openstack(3990dc15-53e8-4cd7-a25d-f9b322b74f3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:58:17 crc kubenswrapper[4718]: E1210 14:58:17.399097 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-cwrnt" podUID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.514710 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5547d7fc49-599jc"] Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.577008 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5547d7fc49-599jc"] Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.604754 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5d86d75cbf-rg448"] Dec 10 14:58:17 crc kubenswrapper[4718]: I1210 14:58:17.618269 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5d86d75cbf-rg448"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.006776 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2g9vq"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.043752 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f14e36c-813e-413c-a7e2-140c02eb6599" path="/var/lib/kubelet/pods/1f14e36c-813e-413c-a7e2-140c02eb6599/volumes" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.044592 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ad4684-e977-4f3e-b1f0-2efa3d60567f" path="/var/lib/kubelet/pods/67ad4684-e977-4f3e-b1f0-2efa3d60567f/volumes" Dec 10 
14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.045502 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb18be8d-fe62-4361-8dd1-1a068f0cc2d4" path="/var/lib/kubelet/pods/fb18be8d-fe62-4361-8dd1-1a068f0cc2d4/volumes" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.046142 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2g9vq"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.083617 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hwv84"] Dec 10 14:58:18 crc kubenswrapper[4718]: E1210 14:58:18.084629 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="init" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.084725 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="init" Dec 10 14:58:18 crc kubenswrapper[4718]: E1210 14:58:18.084846 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c827924-284b-42bd-b871-a892565d7a73" containerName="keystone-bootstrap" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.084911 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c827924-284b-42bd-b871-a892565d7a73" containerName="keystone-bootstrap" Dec 10 14:58:18 crc kubenswrapper[4718]: E1210 14:58:18.085018 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.085096 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" Dec 10 14:58:18 crc kubenswrapper[4718]: E1210 14:58:18.085182 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" containerName="watcher-db-sync" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.085241 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" containerName="watcher-db-sync" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.085525 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c827924-284b-42bd-b871-a892565d7a73" containerName="keystone-bootstrap" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.085612 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" containerName="watcher-db-sync" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.085684 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15123b2-81bb-4fab-b6cd-ed84c0965118" containerName="dnsmasq-dns" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.086547 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.095984 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.096354 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.096460 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.096899 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fvdkw" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.097116 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.110983 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hwv84"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.224821 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh8sq\" (UniqueName: \"kubernetes.io/projected/80b183de-b86e-49a8-8c3c-ebf398fc65eb-kube-api-access-fh8sq\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.225008 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-credential-keys\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.225064 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-scripts\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.225123 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-config-data\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.225194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-fernet-keys\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.225272 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-combined-ca-bundle\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.232333 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.234222 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.251219 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-xlv7b" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.256205 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.261407 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 14:58:18 crc kubenswrapper[4718]: E1210 14:58:18.281822 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current\\\"\"" pod="openstack/barbican-db-sync-cwrnt" podUID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.292418 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.294943 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.302431 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.328192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-combined-ca-bundle\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.328283 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh8sq\" (UniqueName: \"kubernetes.io/projected/80b183de-b86e-49a8-8c3c-ebf398fc65eb-kube-api-access-fh8sq\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.328434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-credential-keys\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.328480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-scripts\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.328534 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-config-data\") pod 
\"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.328580 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-fernet-keys\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.346073 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.363582 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-combined-ca-bundle\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.378382 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-fernet-keys\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.382038 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-credential-keys\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.384563 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh8sq\" (UniqueName: 
\"kubernetes.io/projected/80b183de-b86e-49a8-8c3c-ebf398fc65eb-kube-api-access-fh8sq\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.399590 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-scripts\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.418410 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-config-data\") pod \"keystone-bootstrap-hwv84\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.430489 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.430934 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.431027 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d14e81-de96-43a8-b3d9-fbaca9457b22-logs\") pod 
\"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.431329 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.431898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.432207 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwqg\" (UniqueName: \"kubernetes.io/projected/42d14e81-de96-43a8-b3d9-fbaca9457b22-kube-api-access-rnwqg\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.432456 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2t2m\" (UniqueName: \"kubernetes.io/projected/95261732-95ae-4618-a8a3-c883c287553e-kube-api-access-b2t2m\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.432483 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: 
\"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.432698 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-config-data\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.432751 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95261732-95ae-4618-a8a3-c883c287553e-logs\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.436632 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.445954 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.447985 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.456051 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.482806 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.537361 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-config-data\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.538086 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95261732-95ae-4618-a8a3-c883c287553e-logs\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.538355 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.538500 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.538757 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d14e81-de96-43a8-b3d9-fbaca9457b22-logs\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.538802 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95261732-95ae-4618-a8a3-c883c287553e-logs\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.539207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.539247 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d14e81-de96-43a8-b3d9-fbaca9457b22-logs\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.539636 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda091c0-e668-4132-95bf-e956b4ee9b39-config-data\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.539893 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda091c0-e668-4132-95bf-e956b4ee9b39-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " 
pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.540071 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.540811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwqg\" (UniqueName: \"kubernetes.io/projected/42d14e81-de96-43a8-b3d9-fbaca9457b22-kube-api-access-rnwqg\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.541882 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2t2m\" (UniqueName: \"kubernetes.io/projected/95261732-95ae-4618-a8a3-c883c287553e-kube-api-access-b2t2m\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.542026 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.542193 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda091c0-e668-4132-95bf-e956b4ee9b39-logs\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.542366 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpmgr\" (UniqueName: \"kubernetes.io/projected/cda091c0-e668-4132-95bf-e956b4ee9b39-kube-api-access-mpmgr\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.544018 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.546141 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-config-data\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.548010 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.558705 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.561288 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.561796 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.567824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2t2m\" (UniqueName: \"kubernetes.io/projected/95261732-95ae-4618-a8a3-c883c287553e-kube-api-access-b2t2m\") pod \"watcher-decision-engine-0\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.575880 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwqg\" (UniqueName: \"kubernetes.io/projected/42d14e81-de96-43a8-b3d9-fbaca9457b22-kube-api-access-rnwqg\") pod \"watcher-api-0\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.589462 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.644686 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.645164 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpmgr\" (UniqueName: \"kubernetes.io/projected/cda091c0-e668-4132-95bf-e956b4ee9b39-kube-api-access-mpmgr\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.645335 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda091c0-e668-4132-95bf-e956b4ee9b39-config-data\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.646076 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda091c0-e668-4132-95bf-e956b4ee9b39-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.646236 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda091c0-e668-4132-95bf-e956b4ee9b39-logs\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.646733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cda091c0-e668-4132-95bf-e956b4ee9b39-logs\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.650038 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda091c0-e668-4132-95bf-e956b4ee9b39-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.654804 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda091c0-e668-4132-95bf-e956b4ee9b39-config-data\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.670950 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpmgr\" (UniqueName: \"kubernetes.io/projected/cda091c0-e668-4132-95bf-e956b4ee9b39-kube-api-access-mpmgr\") pod \"watcher-applier-0\" (UID: \"cda091c0-e668-4132-95bf-e956b4ee9b39\") " pod="openstack/watcher-applier-0" Dec 10 14:58:18 crc kubenswrapper[4718]: I1210 14:58:18.787318 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Dec 10 14:58:19 crc kubenswrapper[4718]: E1210 14:58:19.486541 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current" Dec 10 14:58:19 crc kubenswrapper[4718]: E1210 14:58:19.487138 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current" Dec 10 14:58:19 crc kubenswrapper[4718]: E1210 14:58:19.487658 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,Re
adOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvqkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-hctjl_openstack(71a092ad-773d-47b4-bc1f-73358adecf4a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:58:19 crc kubenswrapper[4718]: E1210 14:58:19.489820 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-hctjl" podUID="71a092ad-773d-47b4-bc1f-73358adecf4a" Dec 10 14:58:20 crc kubenswrapper[4718]: I1210 14:58:20.051768 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c827924-284b-42bd-b871-a892565d7a73" path="/var/lib/kubelet/pods/7c827924-284b-42bd-b871-a892565d7a73/volumes" Dec 10 14:58:20 crc kubenswrapper[4718]: E1210 14:58:20.354598 4718 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current\\\"\"" pod="openstack/cinder-db-sync-hctjl" podUID="71a092ad-773d-47b4-bc1f-73358adecf4a" Dec 10 14:58:20 crc kubenswrapper[4718]: I1210 14:58:20.678439 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:20 crc kubenswrapper[4718]: I1210 14:58:20.841727 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hwv84"] Dec 10 14:58:20 crc kubenswrapper[4718]: I1210 14:58:20.859551 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 14:58:20 crc kubenswrapper[4718]: W1210 14:58:20.865908 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95261732_95ae_4618_a8a3_c883c287553e.slice/crio-724665ed02d0c341ded0e4f7dd51c6951e127f62923f1875a65ea5b1a7cd066d WatchSource:0}: Error finding container 724665ed02d0c341ded0e4f7dd51c6951e127f62923f1875a65ea5b1a7cd066d: Status 404 returned error can't find the container with id 724665ed02d0c341ded0e4f7dd51c6951e127f62923f1875a65ea5b1a7cd066d Dec 10 14:58:20 crc kubenswrapper[4718]: I1210 14:58:20.951772 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 10 14:58:20 crc kubenswrapper[4718]: W1210 14:58:20.965375 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcda091c0_e668_4132_95bf_e956b4ee9b39.slice/crio-67d6c66d777c1ac9ee90266c8721b5efbcc3b41a8528e1eb02291aad45c6c97e WatchSource:0}: Error finding container 67d6c66d777c1ac9ee90266c8721b5efbcc3b41a8528e1eb02291aad45c6c97e: Status 404 returned error can't find the container with id 
67d6c66d777c1ac9ee90266c8721b5efbcc3b41a8528e1eb02291aad45c6c97e Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.363463 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"cda091c0-e668-4132-95bf-e956b4ee9b39","Type":"ContainerStarted","Data":"67d6c66d777c1ac9ee90266c8721b5efbcc3b41a8528e1eb02291aad45c6c97e"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.366673 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"42d14e81-de96-43a8-b3d9-fbaca9457b22","Type":"ContainerStarted","Data":"4cc370b8a9536785d63eebbf436717cf3e1c466c3694521394fbbf76e2d525dd"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.366717 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"42d14e81-de96-43a8-b3d9-fbaca9457b22","Type":"ContainerStarted","Data":"ca6fdd3668f489cc6d3e2aa09e1b2e3bae67899fafc9fbaed12ef54510f71fbb"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.368960 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerStarted","Data":"724665ed02d0c341ded0e4f7dd51c6951e127f62923f1875a65ea5b1a7cd066d"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.372672 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qklhp" event={"ID":"f6329eaf-fcae-417e-96a8-96719f02420b","Type":"ContainerStarted","Data":"fa6a2558a3cb00acb2ee5107a07d8a0a8f64bf8afa3aa4aec3b558b02c87faa8"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.378709 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hwv84" event={"ID":"80b183de-b86e-49a8-8c3c-ebf398fc65eb","Type":"ContainerStarted","Data":"b5545b7e7d53cd55e962135153804b1b5030323a12b587abfdac17ff83a30908"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.378807 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-bootstrap-hwv84" event={"ID":"80b183de-b86e-49a8-8c3c-ebf398fc65eb","Type":"ContainerStarted","Data":"c4d67664472c88fbe9e73d44895760ee698c60dbd60626b4312d0b0e1ca7e39f"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.383956 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18de3c3d-d30b-4c09-b1a2-3a6376de8843","Type":"ContainerStarted","Data":"05464e0792ca08ae50c09bb7386a735a606b53c8af4f8c35fb6a7490fef3355f"} Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.406507 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qklhp" podStartSLOduration=3.264198401 podStartE2EDuration="50.406484452s" podCreationTimestamp="2025-12-10 14:57:31 +0000 UTC" firstStartedPulling="2025-12-10 14:57:32.779525889 +0000 UTC m=+1557.728749316" lastFinishedPulling="2025-12-10 14:58:19.92181195 +0000 UTC m=+1604.871035367" observedRunningTime="2025-12-10 14:58:21.401415883 +0000 UTC m=+1606.350639320" watchObservedRunningTime="2025-12-10 14:58:21.406484452 +0000 UTC m=+1606.355707869" Dec 10 14:58:21 crc kubenswrapper[4718]: I1210 14:58:21.434185 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hwv84" podStartSLOduration=3.434145048 podStartE2EDuration="3.434145048s" podCreationTimestamp="2025-12-10 14:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:21.425449646 +0000 UTC m=+1606.374673073" watchObservedRunningTime="2025-12-10 14:58:21.434145048 +0000 UTC m=+1606.383368465" Dec 10 14:58:22 crc kubenswrapper[4718]: I1210 14:58:22.426945 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"42d14e81-de96-43a8-b3d9-fbaca9457b22","Type":"ContainerStarted","Data":"312a4e701b7fa1a24da17adfb81dc88f54646557a139c291ae0af12db57bf464"} Dec 10 14:58:22 crc kubenswrapper[4718]: I1210 14:58:22.427635 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 10 14:58:22 crc kubenswrapper[4718]: I1210 14:58:22.489461 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.489379667 podStartE2EDuration="4.489379667s" podCreationTimestamp="2025-12-10 14:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:22.468227997 +0000 UTC m=+1607.417451414" watchObservedRunningTime="2025-12-10 14:58:22.489379667 +0000 UTC m=+1607.438603254" Dec 10 14:58:23 crc kubenswrapper[4718]: I1210 14:58:23.644793 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 10 14:58:24 crc kubenswrapper[4718]: I1210 14:58:24.452774 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:25 crc kubenswrapper[4718]: I1210 14:58:25.199609 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 10 14:58:27 crc kubenswrapper[4718]: I1210 14:58:27.508338 4718 generic.go:334] "Generic (PLEG): container finished" podID="f6329eaf-fcae-417e-96a8-96719f02420b" containerID="fa6a2558a3cb00acb2ee5107a07d8a0a8f64bf8afa3aa4aec3b558b02c87faa8" exitCode=0 Dec 10 14:58:27 crc kubenswrapper[4718]: I1210 14:58:27.508446 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qklhp" event={"ID":"f6329eaf-fcae-417e-96a8-96719f02420b","Type":"ContainerDied","Data":"fa6a2558a3cb00acb2ee5107a07d8a0a8f64bf8afa3aa4aec3b558b02c87faa8"} Dec 10 14:58:27 crc kubenswrapper[4718]: I1210 14:58:27.513545 4718 generic.go:334] "Generic 
(PLEG): container finished" podID="80b183de-b86e-49a8-8c3c-ebf398fc65eb" containerID="b5545b7e7d53cd55e962135153804b1b5030323a12b587abfdac17ff83a30908" exitCode=0 Dec 10 14:58:27 crc kubenswrapper[4718]: I1210 14:58:27.513630 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hwv84" event={"ID":"80b183de-b86e-49a8-8c3c-ebf398fc65eb","Type":"ContainerDied","Data":"b5545b7e7d53cd55e962135153804b1b5030323a12b587abfdac17ff83a30908"} Dec 10 14:58:28 crc kubenswrapper[4718]: I1210 14:58:28.023331 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:58:28 crc kubenswrapper[4718]: E1210 14:58:28.024427 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:58:28 crc kubenswrapper[4718]: I1210 14:58:28.529093 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3828426-9676-412b-aeaa-22c7c97989c4" containerID="98f769a12127da0cc2a40d9e3421fce565b7ca1da8624cc5b376f8fc402d8596" exitCode=0 Dec 10 14:58:28 crc kubenswrapper[4718]: I1210 14:58:28.529211 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qgdwr" event={"ID":"b3828426-9676-412b-aeaa-22c7c97989c4","Type":"ContainerDied","Data":"98f769a12127da0cc2a40d9e3421fce565b7ca1da8624cc5b376f8fc402d8596"} Dec 10 14:58:28 crc kubenswrapper[4718]: I1210 14:58:28.774727 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 10 14:58:28 crc kubenswrapper[4718]: I1210 14:58:28.802686 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/watcher-api-0" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.287766 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qklhp" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.297925 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.389837 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-config-data\") pod \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.389904 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-scripts\") pod \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390020 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6329eaf-fcae-417e-96a8-96719f02420b-logs\") pod \"f6329eaf-fcae-417e-96a8-96719f02420b\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390060 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-scripts\") pod \"f6329eaf-fcae-417e-96a8-96719f02420b\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390123 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-fernet-keys\") pod \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390172 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5df2\" (UniqueName: \"kubernetes.io/projected/f6329eaf-fcae-417e-96a8-96719f02420b-kube-api-access-c5df2\") pod \"f6329eaf-fcae-417e-96a8-96719f02420b\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390321 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-combined-ca-bundle\") pod \"f6329eaf-fcae-417e-96a8-96719f02420b\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390421 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-credential-keys\") pod \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390469 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh8sq\" (UniqueName: \"kubernetes.io/projected/80b183de-b86e-49a8-8c3c-ebf398fc65eb-kube-api-access-fh8sq\") pod \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.390495 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-combined-ca-bundle\") pod \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\" (UID: \"80b183de-b86e-49a8-8c3c-ebf398fc65eb\") " Dec 10 14:58:29 crc 
kubenswrapper[4718]: I1210 14:58:29.390521 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-config-data\") pod \"f6329eaf-fcae-417e-96a8-96719f02420b\" (UID: \"f6329eaf-fcae-417e-96a8-96719f02420b\") " Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.391500 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6329eaf-fcae-417e-96a8-96719f02420b-logs" (OuterVolumeSpecName: "logs") pod "f6329eaf-fcae-417e-96a8-96719f02420b" (UID: "f6329eaf-fcae-417e-96a8-96719f02420b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.391903 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6329eaf-fcae-417e-96a8-96719f02420b-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.396232 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "80b183de-b86e-49a8-8c3c-ebf398fc65eb" (UID: "80b183de-b86e-49a8-8c3c-ebf398fc65eb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.396920 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-scripts" (OuterVolumeSpecName: "scripts") pod "80b183de-b86e-49a8-8c3c-ebf398fc65eb" (UID: "80b183de-b86e-49a8-8c3c-ebf398fc65eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.397530 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-scripts" (OuterVolumeSpecName: "scripts") pod "f6329eaf-fcae-417e-96a8-96719f02420b" (UID: "f6329eaf-fcae-417e-96a8-96719f02420b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.398629 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b183de-b86e-49a8-8c3c-ebf398fc65eb-kube-api-access-fh8sq" (OuterVolumeSpecName: "kube-api-access-fh8sq") pod "80b183de-b86e-49a8-8c3c-ebf398fc65eb" (UID: "80b183de-b86e-49a8-8c3c-ebf398fc65eb"). InnerVolumeSpecName "kube-api-access-fh8sq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.403099 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "80b183de-b86e-49a8-8c3c-ebf398fc65eb" (UID: "80b183de-b86e-49a8-8c3c-ebf398fc65eb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.404806 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6329eaf-fcae-417e-96a8-96719f02420b-kube-api-access-c5df2" (OuterVolumeSpecName: "kube-api-access-c5df2") pod "f6329eaf-fcae-417e-96a8-96719f02420b" (UID: "f6329eaf-fcae-417e-96a8-96719f02420b"). InnerVolumeSpecName "kube-api-access-c5df2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.425297 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-config-data" (OuterVolumeSpecName: "config-data") pod "f6329eaf-fcae-417e-96a8-96719f02420b" (UID: "f6329eaf-fcae-417e-96a8-96719f02420b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.435261 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80b183de-b86e-49a8-8c3c-ebf398fc65eb" (UID: "80b183de-b86e-49a8-8c3c-ebf398fc65eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.435886 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6329eaf-fcae-417e-96a8-96719f02420b" (UID: "f6329eaf-fcae-417e-96a8-96719f02420b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.474000 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-config-data" (OuterVolumeSpecName: "config-data") pod "80b183de-b86e-49a8-8c3c-ebf398fc65eb" (UID: "80b183de-b86e-49a8-8c3c-ebf398fc65eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494189 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5df2\" (UniqueName: \"kubernetes.io/projected/f6329eaf-fcae-417e-96a8-96719f02420b-kube-api-access-c5df2\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494251 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494262 4718 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494272 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh8sq\" (UniqueName: \"kubernetes.io/projected/80b183de-b86e-49a8-8c3c-ebf398fc65eb-kube-api-access-fh8sq\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494285 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494295 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494306 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 
14:58:29.494315 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494327 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6329eaf-fcae-417e-96a8-96719f02420b-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.494339 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/80b183de-b86e-49a8-8c3c-ebf398fc65eb-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.542622 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qklhp" event={"ID":"f6329eaf-fcae-417e-96a8-96719f02420b","Type":"ContainerDied","Data":"3f2bd73641834fccf96673b525029ebe486986660261430f3467d2e66e69f0d5"} Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.542681 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2bd73641834fccf96673b525029ebe486986660261430f3467d2e66e69f0d5" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.542707 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qklhp" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.545007 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hwv84" event={"ID":"80b183de-b86e-49a8-8c3c-ebf398fc65eb","Type":"ContainerDied","Data":"c4d67664472c88fbe9e73d44895760ee698c60dbd60626b4312d0b0e1ca7e39f"} Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.545043 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4d67664472c88fbe9e73d44895760ee698c60dbd60626b4312d0b0e1ca7e39f" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.545175 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hwv84" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.559037 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.716531 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7cb9f4c9bb-nx4ml"] Dec 10 14:58:29 crc kubenswrapper[4718]: E1210 14:58:29.718160 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6329eaf-fcae-417e-96a8-96719f02420b" containerName="placement-db-sync" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.719799 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6329eaf-fcae-417e-96a8-96719f02420b" containerName="placement-db-sync" Dec 10 14:58:29 crc kubenswrapper[4718]: E1210 14:58:29.722472 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b183de-b86e-49a8-8c3c-ebf398fc65eb" containerName="keystone-bootstrap" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.722778 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b183de-b86e-49a8-8c3c-ebf398fc65eb" containerName="keystone-bootstrap" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.732841 4718 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="80b183de-b86e-49a8-8c3c-ebf398fc65eb" containerName="keystone-bootstrap" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.734302 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6329eaf-fcae-417e-96a8-96719f02420b" containerName="placement-db-sync" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.748693 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7cb9f4c9bb-nx4ml"] Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.749009 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.755671 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.757673 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.758676 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.759019 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5k8qj" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.760725 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.948623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-combined-ca-bundle\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.948690 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-internal-tls-certs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.948749 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtsf7\" (UniqueName: \"kubernetes.io/projected/2ea453fb-60ad-4093-b15f-5cb288f92511-kube-api-access-vtsf7\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.948818 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea453fb-60ad-4093-b15f-5cb288f92511-logs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.948891 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-public-tls-certs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.948942 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-config-data\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 
14:58:29.948977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-scripts\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.954545 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6bcdc7c9dc-hxhdn"] Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.960059 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.966023 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.966768 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fvdkw" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.967070 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.968726 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.968807 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.968826 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 10 14:58:29 crc kubenswrapper[4718]: I1210 14:58:29.983484 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6bcdc7c9dc-hxhdn"] Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.050971 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-scripts\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.051798 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-combined-ca-bundle\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.051852 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-internal-tls-certs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.051918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtsf7\" (UniqueName: \"kubernetes.io/projected/2ea453fb-60ad-4093-b15f-5cb288f92511-kube-api-access-vtsf7\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.051969 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea453fb-60ad-4093-b15f-5cb288f92511-logs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.052077 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-public-tls-certs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.052134 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-config-data\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.053544 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ea453fb-60ad-4093-b15f-5cb288f92511-logs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.059417 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-internal-tls-certs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.060139 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-combined-ca-bundle\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.071632 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-scripts\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: 
\"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.073224 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-config-data\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.073697 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea453fb-60ad-4093-b15f-5cb288f92511-public-tls-certs\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.082349 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtsf7\" (UniqueName: \"kubernetes.io/projected/2ea453fb-60ad-4093-b15f-5cb288f92511-kube-api-access-vtsf7\") pod \"placement-7cb9f4c9bb-nx4ml\" (UID: \"2ea453fb-60ad-4093-b15f-5cb288f92511\") " pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.109842 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154468 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnrx\" (UniqueName: \"kubernetes.io/projected/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-kube-api-access-dlnrx\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154555 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-public-tls-certs\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-combined-ca-bundle\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154638 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-internal-tls-certs\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154739 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-fernet-keys\") pod 
\"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154811 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-credential-keys\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154835 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-config-data\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.154894 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-scripts\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267251 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-internal-tls-certs\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267367 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-fernet-keys\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: 
\"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267414 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-credential-keys\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267441 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-config-data\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267498 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-scripts\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267559 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnrx\" (UniqueName: \"kubernetes.io/projected/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-kube-api-access-dlnrx\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267587 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-public-tls-certs\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 
14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.267617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-combined-ca-bundle\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.289424 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-credential-keys\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.289795 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-scripts\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.289375 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-public-tls-certs\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.291300 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-internal-tls-certs\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.294356 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-config-data\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.300537 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-fernet-keys\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.302915 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnrx\" (UniqueName: \"kubernetes.io/projected/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-kube-api-access-dlnrx\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.333628 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e6f831b-5d26-4e7c-9b6b-ebddeb01327c-combined-ca-bundle\") pod \"keystone-6bcdc7c9dc-hxhdn\" (UID: \"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c\") " pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.567621 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"cda091c0-e668-4132-95bf-e956b4ee9b39","Type":"ContainerStarted","Data":"bab8697df3a92333b4d40c504fe0ab9decc03f4a9464f745baf11378de4b7fa6"} Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.573764 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerStarted","Data":"adaadc9620a0a65e9630ae6487fae0f5e731d6e32ee263930dc619f5499b27c2"} Dec 10 
14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.595421 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.605724 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=5.049864275 podStartE2EDuration="12.605687317s" podCreationTimestamp="2025-12-10 14:58:18 +0000 UTC" firstStartedPulling="2025-12-10 14:58:20.980195239 +0000 UTC m=+1605.929418656" lastFinishedPulling="2025-12-10 14:58:28.536018281 +0000 UTC m=+1613.485241698" observedRunningTime="2025-12-10 14:58:30.601198103 +0000 UTC m=+1615.550421520" watchObservedRunningTime="2025-12-10 14:58:30.605687317 +0000 UTC m=+1615.554910734" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.632980 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=5.469690763 podStartE2EDuration="12.632937793s" podCreationTimestamp="2025-12-10 14:58:18 +0000 UTC" firstStartedPulling="2025-12-10 14:58:20.86975745 +0000 UTC m=+1605.818980867" lastFinishedPulling="2025-12-10 14:58:28.03300448 +0000 UTC m=+1612.982227897" observedRunningTime="2025-12-10 14:58:30.625530644 +0000 UTC m=+1615.574754081" watchObservedRunningTime="2025-12-10 14:58:30.632937793 +0000 UTC m=+1615.582161230" Dec 10 14:58:30 crc kubenswrapper[4718]: I1210 14:58:30.787722 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7cb9f4c9bb-nx4ml"] Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.298066 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qgdwr" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.398930 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8928\" (UniqueName: \"kubernetes.io/projected/b3828426-9676-412b-aeaa-22c7c97989c4-kube-api-access-x8928\") pod \"b3828426-9676-412b-aeaa-22c7c97989c4\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.399127 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-combined-ca-bundle\") pod \"b3828426-9676-412b-aeaa-22c7c97989c4\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.399188 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-db-sync-config-data\") pod \"b3828426-9676-412b-aeaa-22c7c97989c4\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.399225 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-config-data\") pod \"b3828426-9676-412b-aeaa-22c7c97989c4\" (UID: \"b3828426-9676-412b-aeaa-22c7c97989c4\") " Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.406615 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b3828426-9676-412b-aeaa-22c7c97989c4" (UID: "b3828426-9676-412b-aeaa-22c7c97989c4"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.409729 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3828426-9676-412b-aeaa-22c7c97989c4-kube-api-access-x8928" (OuterVolumeSpecName: "kube-api-access-x8928") pod "b3828426-9676-412b-aeaa-22c7c97989c4" (UID: "b3828426-9676-412b-aeaa-22c7c97989c4"). InnerVolumeSpecName "kube-api-access-x8928". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.445315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3828426-9676-412b-aeaa-22c7c97989c4" (UID: "b3828426-9676-412b-aeaa-22c7c97989c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.477668 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-config-data" (OuterVolumeSpecName: "config-data") pod "b3828426-9676-412b-aeaa-22c7c97989c4" (UID: "b3828426-9676-412b-aeaa-22c7c97989c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.502637 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8928\" (UniqueName: \"kubernetes.io/projected/b3828426-9676-412b-aeaa-22c7c97989c4-kube-api-access-x8928\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.502680 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.502699 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.502715 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3828426-9676-412b-aeaa-22c7c97989c4-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.588633 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qgdwr" event={"ID":"b3828426-9676-412b-aeaa-22c7c97989c4","Type":"ContainerDied","Data":"0ceca4f64d29b24363124f890feb23d791314c670d56f5b419b760ba8fcfe4d6"} Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.588692 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ceca4f64d29b24363124f890feb23d791314c670d56f5b419b760ba8fcfe4d6" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.588757 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qgdwr" Dec 10 14:58:31 crc kubenswrapper[4718]: I1210 14:58:31.599791 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7cb9f4c9bb-nx4ml" event={"ID":"2ea453fb-60ad-4093-b15f-5cb288f92511","Type":"ContainerStarted","Data":"34c17b8fc996f879a9086e70be7842013648992030f467f50e40a16d3af57988"} Dec 10 14:58:32 crc kubenswrapper[4718]: I1210 14:58:32.858777 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-gsztn"] Dec 10 14:58:32 crc kubenswrapper[4718]: E1210 14:58:32.870127 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3828426-9676-412b-aeaa-22c7c97989c4" containerName="glance-db-sync" Dec 10 14:58:32 crc kubenswrapper[4718]: I1210 14:58:32.870192 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3828426-9676-412b-aeaa-22c7c97989c4" containerName="glance-db-sync" Dec 10 14:58:32 crc kubenswrapper[4718]: I1210 14:58:32.870433 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3828426-9676-412b-aeaa-22c7c97989c4" containerName="glance-db-sync" Dec 10 14:58:32 crc kubenswrapper[4718]: I1210 14:58:32.871982 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:32 crc kubenswrapper[4718]: I1210 14:58:32.943896 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-gsztn"] Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.046632 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjbs\" (UniqueName: \"kubernetes.io/projected/1c5b1250-54f6-4cbb-8106-452a43018155-kube-api-access-fdjbs\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.046917 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-config\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.046989 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.047092 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-svc\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.047183 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.047280 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.158015 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.158127 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjbs\" (UniqueName: \"kubernetes.io/projected/1c5b1250-54f6-4cbb-8106-452a43018155-kube-api-access-fdjbs\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.158307 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-config\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.158368 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.160431 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.162431 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-config\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.172884 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.173238 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-svc\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.173453 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.175351 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-svc\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.179624 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.195072 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjbs\" (UniqueName: \"kubernetes.io/projected/1c5b1250-54f6-4cbb-8106-452a43018155-kube-api-access-fdjbs\") pod \"dnsmasq-dns-7cf77b4997-gsztn\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.215559 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.787723 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.835328 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.836669 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api-log" containerID="cri-o://4cc370b8a9536785d63eebbf436717cf3e1c466c3694521394fbbf76e2d525dd" gracePeriod=30 Dec 10 14:58:33 crc kubenswrapper[4718]: I1210 14:58:33.836801 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api" containerID="cri-o://312a4e701b7fa1a24da17adfb81dc88f54646557a139c291ae0af12db57bf464" gracePeriod=30 Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.668791 4718 generic.go:334] "Generic (PLEG): container finished" podID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerID="4cc370b8a9536785d63eebbf436717cf3e1c466c3694521394fbbf76e2d525dd" exitCode=143 Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.668896 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"42d14e81-de96-43a8-b3d9-fbaca9457b22","Type":"ContainerDied","Data":"4cc370b8a9536785d63eebbf436717cf3e1c466c3694521394fbbf76e2d525dd"} Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.958902 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.961126 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.968912 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.969137 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.969496 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2mtgq" Dec 10 14:58:34 crc kubenswrapper[4718]: I1210 14:58:34.978357 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.120477 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-logs\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.120568 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.120618 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 
14:58:35.120703 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.120782 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.120826 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzl6l\" (UniqueName: \"kubernetes.io/projected/c6b4c36e-fed9-4522-9841-674e4bab4476-kube-api-access-xzl6l\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.120868 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.255040 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 
14:58:35.255266 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-logs\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.255308 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.255338 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.255432 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.255487 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.255536 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xzl6l\" (UniqueName: \"kubernetes.io/projected/c6b4c36e-fed9-4522-9841-674e4bab4476-kube-api-access-xzl6l\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.256920 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.257197 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-logs\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.259122 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.274141 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.279902 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.283511 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzl6l\" (UniqueName: \"kubernetes.io/projected/c6b4c36e-fed9-4522-9841-674e4bab4476-kube-api-access-xzl6l\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.294916 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.303967 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.309067 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.323309 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.339825 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.355200 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459024 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459125 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459262 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459354 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-logs\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxc98\" (UniqueName: \"kubernetes.io/projected/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-kube-api-access-qxc98\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.459510 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562059 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562177 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562356 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-logs\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562405 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562475 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxc98\" (UniqueName: \"kubernetes.io/projected/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-kube-api-access-qxc98\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.562540 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.563322 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-logs\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.563613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.563768 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.570333 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.571108 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " 
pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.571729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.588864 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.608313 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.615858 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxc98\" (UniqueName: \"kubernetes.io/projected/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-kube-api-access-qxc98\") pod \"glance-default-internal-api-0\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:35 crc kubenswrapper[4718]: I1210 14:58:35.758835 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:37 crc kubenswrapper[4718]: I1210 14:58:37.524786 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:37 crc kubenswrapper[4718]: I1210 14:58:37.685509 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.590904 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.624352 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.646188 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9322/\": dial tcp 10.217.0.159:9322: connect: connection refused" Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.646784 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.159:9322/\": dial tcp 10.217.0.159:9322: connect: connection refused" Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.718003 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.756511 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:38 crc kubenswrapper[4718]: I1210 14:58:38.788279 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Dec 10 14:58:38 
crc kubenswrapper[4718]: I1210 14:58:38.842573 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Dec 10 14:58:39 crc kubenswrapper[4718]: I1210 14:58:39.731886 4718 generic.go:334] "Generic (PLEG): container finished" podID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerID="312a4e701b7fa1a24da17adfb81dc88f54646557a139c291ae0af12db57bf464" exitCode=0 Dec 10 14:58:39 crc kubenswrapper[4718]: I1210 14:58:39.732482 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"42d14e81-de96-43a8-b3d9-fbaca9457b22","Type":"ContainerDied","Data":"312a4e701b7fa1a24da17adfb81dc88f54646557a139c291ae0af12db57bf464"} Dec 10 14:58:39 crc kubenswrapper[4718]: I1210 14:58:39.772572 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Dec 10 14:58:41 crc kubenswrapper[4718]: I1210 14:58:41.021785 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:58:41 crc kubenswrapper[4718]: E1210 14:58:41.022763 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.428185 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.581515 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-config-data\") pod \"42d14e81-de96-43a8-b3d9-fbaca9457b22\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.581578 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-combined-ca-bundle\") pod \"42d14e81-de96-43a8-b3d9-fbaca9457b22\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.581803 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d14e81-de96-43a8-b3d9-fbaca9457b22-logs\") pod \"42d14e81-de96-43a8-b3d9-fbaca9457b22\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.581946 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnwqg\" (UniqueName: \"kubernetes.io/projected/42d14e81-de96-43a8-b3d9-fbaca9457b22-kube-api-access-rnwqg\") pod \"42d14e81-de96-43a8-b3d9-fbaca9457b22\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.581969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-custom-prometheus-ca\") pod \"42d14e81-de96-43a8-b3d9-fbaca9457b22\" (UID: \"42d14e81-de96-43a8-b3d9-fbaca9457b22\") " Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.584540 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42d14e81-de96-43a8-b3d9-fbaca9457b22-logs" (OuterVolumeSpecName: "logs") pod "42d14e81-de96-43a8-b3d9-fbaca9457b22" (UID: "42d14e81-de96-43a8-b3d9-fbaca9457b22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.613453 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d14e81-de96-43a8-b3d9-fbaca9457b22-kube-api-access-rnwqg" (OuterVolumeSpecName: "kube-api-access-rnwqg") pod "42d14e81-de96-43a8-b3d9-fbaca9457b22" (UID: "42d14e81-de96-43a8-b3d9-fbaca9457b22"). InnerVolumeSpecName "kube-api-access-rnwqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.737501 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "42d14e81-de96-43a8-b3d9-fbaca9457b22" (UID: "42d14e81-de96-43a8-b3d9-fbaca9457b22"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.761162 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnwqg\" (UniqueName: \"kubernetes.io/projected/42d14e81-de96-43a8-b3d9-fbaca9457b22-kube-api-access-rnwqg\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.765210 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42d14e81-de96-43a8-b3d9-fbaca9457b22-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.783614 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.811995 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42d14e81-de96-43a8-b3d9-fbaca9457b22" (UID: "42d14e81-de96-43a8-b3d9-fbaca9457b22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.854939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c9ddb894-brvxz" event={"ID":"e1a09589-44b9-49f4-8970-d3381c3d4b99","Type":"ContainerStarted","Data":"4d33e478c80a854b7f668af54eb70894569f582550ed3b902b6d80576cc465c7"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.860726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6b4c36e-fed9-4522-9841-674e4bab4476","Type":"ContainerStarted","Data":"6e3d8c161b6a61231cc022f6ce81dea8cd06bdeffa0e82f756e68a1e9b5f530f"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.867803 4718 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.867843 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.889017 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-config-data" (OuterVolumeSpecName: "config-data") pod "42d14e81-de96-43a8-b3d9-fbaca9457b22" (UID: "42d14e81-de96-43a8-b3d9-fbaca9457b22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.890192 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"42d14e81-de96-43a8-b3d9-fbaca9457b22","Type":"ContainerDied","Data":"ca6fdd3668f489cc6d3e2aa09e1b2e3bae67899fafc9fbaed12ef54510f71fbb"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.890302 4718 scope.go:117] "RemoveContainer" containerID="312a4e701b7fa1a24da17adfb81dc88f54646557a139c291ae0af12db57bf464" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.890756 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.906853 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwrnt" event={"ID":"3990dc15-53e8-4cd7-a25d-f9b322b74f3e","Type":"ContainerStarted","Data":"72987f27f5a542319df56b0be9e453dd2a1038e82678d8220b4c98c21b0878ff"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.944079 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerStarted","Data":"23af10f064789f8e654d5fe32175e0c280568083b73723271e9b524fe4f4ed76"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.952064 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7cb9f4c9bb-nx4ml" event={"ID":"2ea453fb-60ad-4093-b15f-5cb288f92511","Type":"ContainerStarted","Data":"8fba230c007fd2aa28c8bae6c3317a5700d8a26750f1468e2fa2f90b6227e9ef"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.967238 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-gsztn"] Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.968909 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"18de3c3d-d30b-4c09-b1a2-3a6376de8843","Type":"ContainerStarted","Data":"601978f1b4f9d5b5406fb43ec35cd234eb953245e78bb08c3e5c147e1a0788aa"} Dec 10 14:58:42 crc kubenswrapper[4718]: I1210 14:58:42.969788 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d14e81-de96-43a8-b3d9-fbaca9457b22-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.025791 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6bcdc7c9dc-hxhdn"] Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.043626 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-cwrnt" podStartSLOduration=3.439791995 podStartE2EDuration="1m13.043576324s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="2025-12-10 14:57:32.272724171 +0000 UTC m=+1557.221947588" lastFinishedPulling="2025-12-10 14:58:41.8765085 +0000 UTC m=+1626.825731917" observedRunningTime="2025-12-10 14:58:42.95604852 +0000 UTC m=+1627.905271937" watchObservedRunningTime="2025-12-10 14:58:43.043576324 +0000 UTC m=+1627.992799751" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.097578 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.104959 4718 scope.go:117] "RemoveContainer" containerID="4cc370b8a9536785d63eebbf436717cf3e1c466c3694521394fbbf76e2d525dd" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.115065 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.131172 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:43 crc kubenswrapper[4718]: E1210 14:58:43.136760 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" 
containerName="watcher-api" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.136813 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api" Dec 10 14:58:43 crc kubenswrapper[4718]: E1210 14:58:43.136847 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api-log" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.136854 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api-log" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.137104 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api-log" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.137142 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" containerName="watcher-api" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.138984 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.158095 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.158483 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.158734 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.174918 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.174996 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.175032 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.175053 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-internal-tls-certs\") pod 
\"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.175124 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgckh\" (UniqueName: \"kubernetes.io/projected/b0c0a449-91c3-43fe-ba1b-02a146745b82-kube-api-access-mgckh\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.175180 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-config-data\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.175200 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c0a449-91c3-43fe-ba1b-02a146745b82-logs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.178829 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278319 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278412 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-internal-tls-certs\") pod 
\"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278479 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgckh\" (UniqueName: \"kubernetes.io/projected/b0c0a449-91c3-43fe-ba1b-02a146745b82-kube-api-access-mgckh\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-config-data\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278548 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c0a449-91c3-43fe-ba1b-02a146745b82-logs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278651 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.278692 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.282039 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0c0a449-91c3-43fe-ba1b-02a146745b82-logs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.292746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-public-tls-certs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.293057 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.293369 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.298640 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.300915 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0c0a449-91c3-43fe-ba1b-02a146745b82-config-data\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 
crc kubenswrapper[4718]: I1210 14:58:43.316111 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgckh\" (UniqueName: \"kubernetes.io/projected/b0c0a449-91c3-43fe-ba1b-02a146745b82-kube-api-access-mgckh\") pod \"watcher-api-0\" (UID: \"b0c0a449-91c3-43fe-ba1b-02a146745b82\") " pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.452453 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 10 14:58:43 crc kubenswrapper[4718]: I1210 14:58:43.751680 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:43 crc kubenswrapper[4718]: W1210 14:58:43.818441 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b32992e_7a8d_4a76_adfe_8c11f0b3ade9.slice/crio-df4e0b785083e28986565414d96ed99da8a6f7475293bb16ff3c4ba0cf42bb5b WatchSource:0}: Error finding container df4e0b785083e28986565414d96ed99da8a6f7475293bb16ff3c4ba0cf42bb5b: Status 404 returned error can't find the container with id df4e0b785083e28986565414d96ed99da8a6f7475293bb16ff3c4ba0cf42bb5b Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.059292 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d14e81-de96-43a8-b3d9-fbaca9457b22" path="/var/lib/kubelet/pods/42d14e81-de96-43a8-b3d9-fbaca9457b22/volumes" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.185563 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.276400 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hctjl" event={"ID":"71a092ad-773d-47b4-bc1f-73358adecf4a","Type":"ContainerStarted","Data":"cb50add0d25a417f0511946c6a546ff6032e149e5d3e6c23746e66422a172656"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.288125 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c9ddb894-brvxz" event={"ID":"e1a09589-44b9-49f4-8970-d3381c3d4b99","Type":"ContainerStarted","Data":"7a921001c9ed00b14566bf730f10ca8fd9c20100e5de2d810562814d52773543"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.317791 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9","Type":"ContainerStarted","Data":"df4e0b785083e28986565414d96ed99da8a6f7475293bb16ff3c4ba0cf42bb5b"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.323407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" event={"ID":"1c5b1250-54f6-4cbb-8106-452a43018155","Type":"ContainerStarted","Data":"5840c6abcb6b83a95107d847ebe58b80a716c057e917dcf3616ad4a036b4d024"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.334796 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-hctjl" podStartSLOduration=4.906016245 podStartE2EDuration="1m14.334761246s" podCreationTimestamp="2025-12-10 14:57:30 +0000 UTC" firstStartedPulling="2025-12-10 14:57:32.449722199 +0000 UTC m=+1557.398945626" lastFinishedPulling="2025-12-10 14:58:41.87846721 +0000 UTC m=+1626.827690627" observedRunningTime="2025-12-10 14:58:44.305178041 +0000 UTC m=+1629.254401458" watchObservedRunningTime="2025-12-10 14:58:44.334761246 +0000 UTC m=+1629.283984663" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.345586 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77c9ddb894-brvxz" podStartSLOduration=5.9837055249999995 podStartE2EDuration="1m3.345551511s" podCreationTimestamp="2025-12-10 14:57:41 +0000 UTC" firstStartedPulling="2025-12-10 14:57:44.511932584 +0000 UTC m=+1569.461156001" lastFinishedPulling="2025-12-10 14:58:41.87377857 +0000 UTC m=+1626.823001987" observedRunningTime="2025-12-10 
14:58:44.332784796 +0000 UTC m=+1629.282008223" watchObservedRunningTime="2025-12-10 14:58:44.345551511 +0000 UTC m=+1629.294774938" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.371421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerStarted","Data":"18c2b9ed9086807a1a8a78e70010cd811dfcd5d47bc9bf542e696eb06efee24c"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.390193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6bcdc7c9dc-hxhdn" event={"ID":"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c","Type":"ContainerStarted","Data":"d463c5ee7b35ce8c11d14d25c67fcffe86310faf4bc890a5eb03775b82ac8a2b"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.391724 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.407973 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6bb7f498bd-pjx6h" podStartSLOduration=19.58214024 podStartE2EDuration="1m4.407937644s" podCreationTimestamp="2025-12-10 14:57:40 +0000 UTC" firstStartedPulling="2025-12-10 14:57:56.21533361 +0000 UTC m=+1581.164557027" lastFinishedPulling="2025-12-10 14:58:41.041131014 +0000 UTC m=+1625.990354431" observedRunningTime="2025-12-10 14:58:44.405359778 +0000 UTC m=+1629.354583195" watchObservedRunningTime="2025-12-10 14:58:44.407937644 +0000 UTC m=+1629.357161061" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.424307 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7cb9f4c9bb-nx4ml" event={"ID":"2ea453fb-60ad-4093-b15f-5cb288f92511","Type":"ContainerStarted","Data":"538a4b26d709bd9e5b682296131ccc7d5ac97aa2fc4e5c8ef66566f196319388"} Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.424749 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.424901 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.438182 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6bcdc7c9dc-hxhdn" podStartSLOduration=15.438149105 podStartE2EDuration="15.438149105s" podCreationTimestamp="2025-12-10 14:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:44.430772137 +0000 UTC m=+1629.379995554" watchObservedRunningTime="2025-12-10 14:58:44.438149105 +0000 UTC m=+1629.387372532" Dec 10 14:58:44 crc kubenswrapper[4718]: I1210 14:58:44.511808 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7cb9f4c9bb-nx4ml" podStartSLOduration=15.511773095 podStartE2EDuration="15.511773095s" podCreationTimestamp="2025-12-10 14:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:44.468440419 +0000 UTC m=+1629.417663846" watchObservedRunningTime="2025-12-10 14:58:44.511773095 +0000 UTC m=+1629.460996512" Dec 10 14:58:45 crc kubenswrapper[4718]: I1210 14:58:45.447461 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c0a449-91c3-43fe-ba1b-02a146745b82","Type":"ContainerStarted","Data":"e1a2b1d22232f8a0613fa1a07b155795071da32aed1e47aad2da644bd936fabe"} Dec 10 14:58:45 crc kubenswrapper[4718]: I1210 14:58:45.453662 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6bcdc7c9dc-hxhdn" event={"ID":"6e6f831b-5d26-4e7c-9b6b-ebddeb01327c","Type":"ContainerStarted","Data":"ee5d278c97a833a4582947189cc8792aff4256fc99417f281859616152661125"} Dec 10 14:58:45 crc 
kubenswrapper[4718]: I1210 14:58:45.462217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6b4c36e-fed9-4522-9841-674e4bab4476","Type":"ContainerStarted","Data":"ead3357b0bfad41a777e216a2325ed871b751f958fd0e4e41b05e25df243dcbf"} Dec 10 14:58:45 crc kubenswrapper[4718]: I1210 14:58:45.472119 4718 generic.go:334] "Generic (PLEG): container finished" podID="1c5b1250-54f6-4cbb-8106-452a43018155" containerID="aa3b37a86998d342d296e3eda1558d1a2bbd7a212cee3c2e93cd2b0d22f9d91a" exitCode=0 Dec 10 14:58:45 crc kubenswrapper[4718]: I1210 14:58:45.472933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" event={"ID":"1c5b1250-54f6-4cbb-8106-452a43018155","Type":"ContainerDied","Data":"aa3b37a86998d342d296e3eda1558d1a2bbd7a212cee3c2e93cd2b0d22f9d91a"} Dec 10 14:58:46 crc kubenswrapper[4718]: I1210 14:58:46.520072 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9","Type":"ContainerStarted","Data":"f2ba94f5aae3d186600ae5b23acd5cbd70378a7b1ddfb7b39c718479f702b079"} Dec 10 14:58:46 crc kubenswrapper[4718]: I1210 14:58:46.539831 4718 generic.go:334] "Generic (PLEG): container finished" podID="95261732-95ae-4618-a8a3-c883c287553e" containerID="adaadc9620a0a65e9630ae6487fae0f5e731d6e32ee263930dc619f5499b27c2" exitCode=1 Dec 10 14:58:46 crc kubenswrapper[4718]: I1210 14:58:46.539956 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerDied","Data":"adaadc9620a0a65e9630ae6487fae0f5e731d6e32ee263930dc619f5499b27c2"} Dec 10 14:58:46 crc kubenswrapper[4718]: I1210 14:58:46.541515 4718 scope.go:117] "RemoveContainer" containerID="adaadc9620a0a65e9630ae6487fae0f5e731d6e32ee263930dc619f5499b27c2" Dec 10 14:58:46 crc kubenswrapper[4718]: I1210 14:58:46.550511 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c0a449-91c3-43fe-ba1b-02a146745b82","Type":"ContainerStarted","Data":"be354f7df07f314bef1df5cf78833368789f729c4eaf0f1ccc8868e593b25179"} Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.573861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9","Type":"ContainerStarted","Data":"181c30f2eca37530e950192518a047194baf7306826789aea12b2a0cd3a0d1b5"} Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.574207 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-log" containerID="cri-o://f2ba94f5aae3d186600ae5b23acd5cbd70378a7b1ddfb7b39c718479f702b079" gracePeriod=30 Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.574813 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-httpd" containerID="cri-o://181c30f2eca37530e950192518a047194baf7306826789aea12b2a0cd3a0d1b5" gracePeriod=30 Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.583999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" event={"ID":"1c5b1250-54f6-4cbb-8106-452a43018155","Type":"ContainerStarted","Data":"0a8f037895e338fac6c2f2e2bba5b4d6df491641111f8a8447bdcfb683532e95"} Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.584214 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.606441 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerStarted","Data":"fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707"} Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.619862 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.61982199 podStartE2EDuration="13.61982199s" podCreationTimestamp="2025-12-10 14:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:47.598436504 +0000 UTC m=+1632.547659941" watchObservedRunningTime="2025-12-10 14:58:47.61982199 +0000 UTC m=+1632.569045407" Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.620767 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"b0c0a449-91c3-43fe-ba1b-02a146745b82","Type":"ContainerStarted","Data":"e6d274f56dc947ba91b885011e5f568d86924500b2fcae789c83f205e8fb6548"} Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.621479 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.637768 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6b4c36e-fed9-4522-9841-674e4bab4476","Type":"ContainerStarted","Data":"6e942a5d6995567a102679b4efbece317c4a0c1bceec8e41df6bdbf08618f81a"} Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.638736 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-log" containerID="cri-o://ead3357b0bfad41a777e216a2325ed871b751f958fd0e4e41b05e25df243dcbf" gracePeriod=30 Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.638902 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-external-api-0" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-httpd" containerID="cri-o://6e942a5d6995567a102679b4efbece317c4a0c1bceec8e41df6bdbf08618f81a" gracePeriod=30 Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.752617 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" podStartSLOduration=15.752577459 podStartE2EDuration="15.752577459s" podCreationTimestamp="2025-12-10 14:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:47.641200555 +0000 UTC m=+1632.590423972" watchObservedRunningTime="2025-12-10 14:58:47.752577459 +0000 UTC m=+1632.701800886" Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.880983 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=4.880927846 podStartE2EDuration="4.880927846s" podCreationTimestamp="2025-12-10 14:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:47.674910356 +0000 UTC m=+1632.624133783" watchObservedRunningTime="2025-12-10 14:58:47.880927846 +0000 UTC m=+1632.830151263" Dec 10 14:58:47 crc kubenswrapper[4718]: I1210 14:58:47.969446 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.969407844 podStartE2EDuration="14.969407844s" podCreationTimestamp="2025-12-10 14:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:47.807152402 +0000 UTC m=+1632.756375819" watchObservedRunningTime="2025-12-10 14:58:47.969407844 +0000 UTC m=+1632.918631261" Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.453045 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.591856 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.592309 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.694997 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerID="181c30f2eca37530e950192518a047194baf7306826789aea12b2a0cd3a0d1b5" exitCode=143 Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.695051 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerID="f2ba94f5aae3d186600ae5b23acd5cbd70378a7b1ddfb7b39c718479f702b079" exitCode=143 Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.695113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9","Type":"ContainerDied","Data":"181c30f2eca37530e950192518a047194baf7306826789aea12b2a0cd3a0d1b5"} Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.695163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9","Type":"ContainerDied","Data":"f2ba94f5aae3d186600ae5b23acd5cbd70378a7b1ddfb7b39c718479f702b079"} Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.713918 4718 generic.go:334] "Generic (PLEG): container finished" podID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerID="6e942a5d6995567a102679b4efbece317c4a0c1bceec8e41df6bdbf08618f81a" exitCode=0 Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.713969 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerID="ead3357b0bfad41a777e216a2325ed871b751f958fd0e4e41b05e25df243dcbf" exitCode=143 Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.716369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6b4c36e-fed9-4522-9841-674e4bab4476","Type":"ContainerDied","Data":"6e942a5d6995567a102679b4efbece317c4a0c1bceec8e41df6bdbf08618f81a"} Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.716955 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6b4c36e-fed9-4522-9841-674e4bab4476","Type":"ContainerDied","Data":"ead3357b0bfad41a777e216a2325ed871b751f958fd0e4e41b05e25df243dcbf"} Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.746323 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:48 crc kubenswrapper[4718]: I1210 14:58:48.893715 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.001894 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.001987 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-config-data\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.002055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxc98\" (UniqueName: \"kubernetes.io/projected/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-kube-api-access-qxc98\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.002099 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-logs\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.002220 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-httpd-run\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.002248 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-combined-ca-bundle\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.002312 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-scripts\") pod \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\" (UID: \"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.002982 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-logs" (OuterVolumeSpecName: "logs") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.003053 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.033846 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-kube-api-access-qxc98" (OuterVolumeSpecName: "kube-api-access-qxc98") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "kube-api-access-qxc98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.035724 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.094827 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-scripts" (OuterVolumeSpecName: "scripts") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.099630 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-config-data" (OuterVolumeSpecName: "config-data") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.120574 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.120632 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.120663 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.120676 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.120689 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxc98\" (UniqueName: \"kubernetes.io/projected/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-kube-api-access-qxc98\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.120707 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.158650 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.163641 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" (UID: "2b32992e-7a8d-4a76-adfe-8c11f0b3ade9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.223652 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.223896 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.346211 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428228 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzl6l\" (UniqueName: \"kubernetes.io/projected/c6b4c36e-fed9-4522-9841-674e4bab4476-kube-api-access-xzl6l\") pod \"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428410 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-logs\") pod \"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428486 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-combined-ca-bundle\") pod 
\"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428649 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-config-data\") pod \"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428690 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-httpd-run\") pod \"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428777 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.428909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-scripts\") pod \"c6b4c36e-fed9-4522-9841-674e4bab4476\" (UID: \"c6b4c36e-fed9-4522-9841-674e4bab4476\") " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.433171 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.433376 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-logs" (OuterVolumeSpecName: "logs") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.440163 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.452010 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b4c36e-fed9-4522-9841-674e4bab4476-kube-api-access-xzl6l" (OuterVolumeSpecName: "kube-api-access-xzl6l") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "kube-api-access-xzl6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.457312 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-scripts" (OuterVolumeSpecName: "scripts") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.494692 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.532711 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.532821 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.532837 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.532851 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzl6l\" (UniqueName: \"kubernetes.io/projected/c6b4c36e-fed9-4522-9841-674e4bab4476-kube-api-access-xzl6l\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.532869 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b4c36e-fed9-4522-9841-674e4bab4476-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.532883 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.533431 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-config-data" (OuterVolumeSpecName: "config-data") pod "c6b4c36e-fed9-4522-9841-674e4bab4476" (UID: "c6b4c36e-fed9-4522-9841-674e4bab4476"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.569287 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.637172 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b4c36e-fed9-4522-9841-674e4bab4476-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.637295 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.731226 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6b4c36e-fed9-4522-9841-674e4bab4476","Type":"ContainerDied","Data":"6e3d8c161b6a61231cc022f6ce81dea8cd06bdeffa0e82f756e68a1e9b5f530f"} Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.731311 4718 scope.go:117] "RemoveContainer" containerID="6e942a5d6995567a102679b4efbece317c4a0c1bceec8e41df6bdbf08618f81a" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.731329 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.738114 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.739831 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2b32992e-7a8d-4a76-adfe-8c11f0b3ade9","Type":"ContainerDied","Data":"df4e0b785083e28986565414d96ed99da8a6f7475293bb16ff3c4ba0cf42bb5b"} Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.741266 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.800328 4718 scope.go:117] "RemoveContainer" containerID="ead3357b0bfad41a777e216a2325ed871b751f958fd0e4e41b05e25df243dcbf" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.809247 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.833145 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.837378 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.861431 4718 scope.go:117] "RemoveContainer" containerID="181c30f2eca37530e950192518a047194baf7306826789aea12b2a0cd3a0d1b5" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.874107 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.921715 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:49 crc kubenswrapper[4718]: E1210 14:58:49.922390 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-httpd" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.922432 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-httpd" Dec 10 14:58:49 crc kubenswrapper[4718]: E1210 14:58:49.922456 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-log" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.922468 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-log" Dec 10 14:58:49 crc kubenswrapper[4718]: E1210 14:58:49.922478 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-httpd" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.922486 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-httpd" Dec 10 14:58:49 crc kubenswrapper[4718]: E1210 14:58:49.922504 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-log" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.922510 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-log" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.923225 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-log" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.923246 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-httpd" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.923268 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" containerName="glance-httpd" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.923280 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" containerName="glance-log" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.925126 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.928301 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.932106 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.932585 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.932912 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2mtgq" Dec 10 14:58:49 crc kubenswrapper[4718]: I1210 14:58:49.957558 4718 scope.go:117] "RemoveContainer" containerID="f2ba94f5aae3d186600ae5b23acd5cbd70378a7b1ddfb7b39c718479f702b079" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.023019 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.092127 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b32992e-7a8d-4a76-adfe-8c11f0b3ade9" path="/var/lib/kubelet/pods/2b32992e-7a8d-4a76-adfe-8c11f0b3ade9/volumes" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.118262 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b4c36e-fed9-4522-9841-674e4bab4476" path="/var/lib/kubelet/pods/c6b4c36e-fed9-4522-9841-674e4bab4476/volumes" Dec 10 
14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.119509 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.124602 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.124758 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.135132 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.135136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.169835 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176591 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176725 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176797 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbbc\" (UniqueName: \"kubernetes.io/projected/e6d8b276-6f9f-4b17-b61c-996bdec36f85-kube-api-access-6dbbc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176838 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176871 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176896 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-logs\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.176918 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.279347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280056 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280147 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280242 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmmb\" (UniqueName: \"kubernetes.io/projected/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-kube-api-access-dkmmb\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280471 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280622 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280670 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dbbc\" (UniqueName: \"kubernetes.io/projected/e6d8b276-6f9f-4b17-b61c-996bdec36f85-kube-api-access-6dbbc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280747 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280820 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280899 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280944 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-logs\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.280976 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.281094 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.281127 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.282784 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.285203 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.285306 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-logs\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.312713 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-public-tls-certs\") 
pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.313678 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-scripts\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.325399 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-config-data\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.325680 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dbbc\" (UniqueName: \"kubernetes.io/projected/e6d8b276-6f9f-4b17-b61c-996bdec36f85-kube-api-access-6dbbc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.334817 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.335631 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " 
pod="openstack/glance-default-external-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387089 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387128 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387222 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387252 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 
14:58:50.387280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387304 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387334 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmmb\" (UniqueName: \"kubernetes.io/projected/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-kube-api-access-dkmmb\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387608 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.387951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-logs\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.388339 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.403461 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.404137 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.404737 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.410317 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmmb\" (UniqueName: \"kubernetes.io/projected/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-kube-api-access-dkmmb\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.414212 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.433231 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.499141 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 14:58:50 crc kubenswrapper[4718]: I1210 14:58:50.565693 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.307309 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.309496 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.408733 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.517869 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.526913 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.528280 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:58:51 
crc kubenswrapper[4718]: I1210 14:58:51.805077 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6d8b276-6f9f-4b17-b61c-996bdec36f85","Type":"ContainerStarted","Data":"747defd0881d9398c746ed42f1d9de880dcd25b74424089274f0ad0173fca117"} Dec 10 14:58:51 crc kubenswrapper[4718]: I1210 14:58:51.811449 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6","Type":"ContainerStarted","Data":"b655742eb1cff476b3ee73af6d903e450810880a71c4b851ff5af3958ec0f8d6"} Dec 10 14:58:52 crc kubenswrapper[4718]: I1210 14:58:52.645620 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="b0c0a449-91c3-43fe-ba1b-02a146745b82" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.166:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:58:52 crc kubenswrapper[4718]: I1210 14:58:52.867642 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6","Type":"ContainerStarted","Data":"07ce423c4440dd12f0a3bf202da6f5b8a56b40b1c7579615b85e0b871a76593c"} Dec 10 14:58:52 crc kubenswrapper[4718]: I1210 14:58:52.877515 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6d8b276-6f9f-4b17-b61c-996bdec36f85","Type":"ContainerStarted","Data":"b6f101c7d12c30882997bb52c30f3bffc266d2a63aafc2678435cbabb9f6a25b"} Dec 10 14:58:53 crc kubenswrapper[4718]: I1210 14:58:53.197494 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 10 14:58:53 crc kubenswrapper[4718]: I1210 14:58:53.219614 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:58:53 crc 
kubenswrapper[4718]: I1210 14:58:53.382831 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578598f949-4lgxw"] Dec 10 14:58:53 crc kubenswrapper[4718]: I1210 14:58:53.383270 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578598f949-4lgxw" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="dnsmasq-dns" containerID="cri-o://9cdcf6bfb5a9d58472ed8acc4d9913a45c1ce0eeb3f6857524686c93222b8e7b" gracePeriod=10 Dec 10 14:58:53 crc kubenswrapper[4718]: I1210 14:58:53.459674 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 10 14:58:53 crc kubenswrapper[4718]: I1210 14:58:53.487883 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Dec 10 14:58:53 crc kubenswrapper[4718]: I1210 14:58:53.899236 4718 generic.go:334] "Generic (PLEG): container finished" podID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerID="9cdcf6bfb5a9d58472ed8acc4d9913a45c1ce0eeb3f6857524686c93222b8e7b" exitCode=0 Dec 10 14:58:54 crc kubenswrapper[4718]: I1210 14:58:53.899835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-4lgxw" event={"ID":"830caa3e-735c-4fda-9323-ad5cf8d4779a","Type":"ContainerDied","Data":"9cdcf6bfb5a9d58472ed8acc4d9913a45c1ce0eeb3f6857524686c93222b8e7b"} Dec 10 14:58:54 crc kubenswrapper[4718]: I1210 14:58:54.112243 4718 generic.go:334] "Generic (PLEG): container finished" podID="95261732-95ae-4618-a8a3-c883c287553e" containerID="fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707" exitCode=1 Dec 10 14:58:54 crc kubenswrapper[4718]: I1210 14:58:54.132853 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerDied","Data":"fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707"} Dec 10 14:58:54 crc 
kubenswrapper[4718]: I1210 14:58:54.132934 4718 scope.go:117] "RemoveContainer" containerID="adaadc9620a0a65e9630ae6487fae0f5e731d6e32ee263930dc619f5499b27c2" Dec 10 14:58:54 crc kubenswrapper[4718]: I1210 14:58:54.134762 4718 scope.go:117] "RemoveContainer" containerID="fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707" Dec 10 14:58:54 crc kubenswrapper[4718]: E1210 14:58:54.135083 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 14:58:54 crc kubenswrapper[4718]: I1210 14:58:54.140748 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 10 14:58:55 crc kubenswrapper[4718]: I1210 14:58:55.161822 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6","Type":"ContainerStarted","Data":"c0fbe5ce5cebf4ef05059fbabea799d8263d4777808df61b99d66d470e2e1705"} Dec 10 14:58:55 crc kubenswrapper[4718]: I1210 14:58:55.171299 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6d8b276-6f9f-4b17-b61c-996bdec36f85","Type":"ContainerStarted","Data":"7478490f9f7961a45a1eb2837c66227a4e5d802f587b00eb8cdcd1973c9c2efe"} Dec 10 14:58:55 crc kubenswrapper[4718]: I1210 14:58:55.215044 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.215002076 podStartE2EDuration="6.215002076s" podCreationTimestamp="2025-12-10 14:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 14:58:55.20810085 +0000 UTC m=+1640.157324267" watchObservedRunningTime="2025-12-10 14:58:55.215002076 +0000 UTC m=+1640.164225493" Dec 10 14:58:55 crc kubenswrapper[4718]: I1210 14:58:55.274109 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.274078454 podStartE2EDuration="6.274078454s" podCreationTimestamp="2025-12-10 14:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:58:55.252589856 +0000 UTC m=+1640.201813283" watchObservedRunningTime="2025-12-10 14:58:55.274078454 +0000 UTC m=+1640.223301871" Dec 10 14:58:56 crc kubenswrapper[4718]: I1210 14:58:56.031879 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:58:56 crc kubenswrapper[4718]: E1210 14:58:56.032554 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:58:56 crc kubenswrapper[4718]: I1210 14:58:56.913292 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-578598f949-4lgxw" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Dec 10 14:58:58 crc kubenswrapper[4718]: I1210 14:58:58.234599 4718 generic.go:334] "Generic (PLEG): container finished" podID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" containerID="72987f27f5a542319df56b0be9e453dd2a1038e82678d8220b4c98c21b0878ff" exitCode=0 Dec 10 
14:58:58 crc kubenswrapper[4718]: I1210 14:58:58.235217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwrnt" event={"ID":"3990dc15-53e8-4cd7-a25d-f9b322b74f3e","Type":"ContainerDied","Data":"72987f27f5a542319df56b0be9e453dd2a1038e82678d8220b4c98c21b0878ff"} Dec 10 14:58:58 crc kubenswrapper[4718]: I1210 14:58:58.589742 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 14:58:58 crc kubenswrapper[4718]: I1210 14:58:58.591620 4718 scope.go:117] "RemoveContainer" containerID="fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707" Dec 10 14:58:58 crc kubenswrapper[4718]: E1210 14:58:58.592049 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.535676 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.536131 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.566731 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.567276 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.663922 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.664808 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.669242 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.669430 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:00 crc kubenswrapper[4718]: I1210 14:59:00.897029 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.045666 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-combined-ca-bundle\") pod \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.045853 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-db-sync-config-data\") pod \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.045968 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftgjj\" (UniqueName: \"kubernetes.io/projected/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-kube-api-access-ftgjj\") pod \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\" (UID: \"3990dc15-53e8-4cd7-a25d-f9b322b74f3e\") " Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.057561 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-kube-api-access-ftgjj" (OuterVolumeSpecName: "kube-api-access-ftgjj") pod "3990dc15-53e8-4cd7-a25d-f9b322b74f3e" (UID: "3990dc15-53e8-4cd7-a25d-f9b322b74f3e"). InnerVolumeSpecName "kube-api-access-ftgjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.058140 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3990dc15-53e8-4cd7-a25d-f9b322b74f3e" (UID: "3990dc15-53e8-4cd7-a25d-f9b322b74f3e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.092900 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3990dc15-53e8-4cd7-a25d-f9b322b74f3e" (UID: "3990dc15-53e8-4cd7-a25d-f9b322b74f3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.161980 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.162031 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.162042 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftgjj\" (UniqueName: \"kubernetes.io/projected/3990dc15-53e8-4cd7-a25d-f9b322b74f3e-kube-api-access-ftgjj\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.281785 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-cwrnt" event={"ID":"3990dc15-53e8-4cd7-a25d-f9b322b74f3e","Type":"ContainerDied","Data":"8030f2957773807418461c2a2f1fa5b1ec32459c4bed524388222c09b87f0e0a"} Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.282724 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8030f2957773807418461c2a2f1fa5b1ec32459c4bed524388222c09b87f0e0a" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.282823 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.282976 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.283044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 
14:59:01.283113 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.282132 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-cwrnt" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.312343 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Dec 10 14:59:01 crc kubenswrapper[4718]: I1210 14:59:01.530626 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77c9ddb894-brvxz" podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.540503 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-86c6894dc7-l9prg"] Dec 10 14:59:02 crc kubenswrapper[4718]: E1210 14:59:02.541667 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" containerName="barbican-db-sync" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.541690 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" containerName="barbican-db-sync" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.541943 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" containerName="barbican-db-sync" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.543592 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.561969 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.562483 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mtk4v" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.562698 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.588989 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-595577df7d-rjzmf"] Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.638942 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nsvp\" (UniqueName: \"kubernetes.io/projected/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-kube-api-access-7nsvp\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.639406 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-logs\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.639495 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-combined-ca-bundle\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " 
pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.639743 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-config-data\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.639891 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-config-data-custom\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.645600 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.673094 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.674078 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86c6894dc7-l9prg"] Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.748874 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-logs\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.748920 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-combined-ca-bundle\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.749002 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-config-data\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.749075 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-config-data-custom\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.749128 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nsvp\" (UniqueName: \"kubernetes.io/projected/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-kube-api-access-7nsvp\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.767596 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-logs\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.786364 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-combined-ca-bundle\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.794067 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-config-data\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.811652 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-config-data-custom\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.833203 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nsvp\" (UniqueName: \"kubernetes.io/projected/f8b33603-9f2b-410e-a4ae-52b20ea62bd9-kube-api-access-7nsvp\") pod \"barbican-worker-86c6894dc7-l9prg\" (UID: \"f8b33603-9f2b-410e-a4ae-52b20ea62bd9\") " pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.854945 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srbdf\" (UniqueName: \"kubernetes.io/projected/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-kube-api-access-srbdf\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.855055 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-config-data\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.855093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-combined-ca-bundle\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.855131 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-config-data-custom\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.855179 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-logs\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.871478 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-595577df7d-rjzmf"] Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.948837 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d67c77454-2pqgh"] Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.951056 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-86c6894dc7-l9prg" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.990438 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srbdf\" (UniqueName: \"kubernetes.io/projected/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-kube-api-access-srbdf\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.992722 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-config-data\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.992870 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-combined-ca-bundle\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.993074 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-config-data-custom\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.993336 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-logs\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:02 crc kubenswrapper[4718]: I1210 14:59:02.994441 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-logs\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.048494 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srbdf\" (UniqueName: \"kubernetes.io/projected/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-kube-api-access-srbdf\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.051476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-combined-ca-bundle\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.053293 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-5ccv8"] Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.054268 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.060849 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.069741 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-config-data-custom\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.126586 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-logs\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.126788 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-combined-ca-bundle\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.127145 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89cj9\" (UniqueName: \"kubernetes.io/projected/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-kube-api-access-89cj9\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.127213 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data-custom\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.127797 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.128249 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.132949 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc-config-data\") pod \"barbican-keystone-listener-595577df7d-rjzmf\" (UID: \"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc\") " pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.216818 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-5ccv8"] Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.243734 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.244497 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-logs\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.245650 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-combined-ca-bundle\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.245863 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89cj9\" (UniqueName: \"kubernetes.io/projected/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-kube-api-access-89cj9\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.245891 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data-custom\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.261851 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-logs\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.265727 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.266108 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data-custom\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.274625 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-combined-ca-bundle\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.297325 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89cj9\" (UniqueName: \"kubernetes.io/projected/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-kube-api-access-89cj9\") pod \"barbican-api-d67c77454-2pqgh\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.304340 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d67c77454-2pqgh"] Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.325228 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.325805 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.353275 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sqw8z\" (UniqueName: \"kubernetes.io/projected/12eb657b-f669-41c3-9125-9266909468a3-kube-api-access-sqw8z\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.353429 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-sb\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.353480 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-nb\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.354490 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-svc\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.354662 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-config\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.354793 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-swift-storage-0\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.375808 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.458285 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqw8z\" (UniqueName: \"kubernetes.io/projected/12eb657b-f669-41c3-9125-9266909468a3-kube-api-access-sqw8z\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.459229 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-sb\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.460369 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-nb\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.460682 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-svc\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: 
\"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.460788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-sb\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.460912 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-config\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.461030 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-swift-storage-0\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.462497 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-nb\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.463343 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-svc\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 
14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.463578 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-swift-storage-0\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.463978 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-config\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.511086 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqw8z\" (UniqueName: \"kubernetes.io/projected/12eb657b-f669-41c3-9125-9266909468a3-kube-api-access-sqw8z\") pod \"dnsmasq-dns-549c96b4c7-5ccv8\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.590694 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:03 crc kubenswrapper[4718]: I1210 14:59:03.601135 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:03 crc kubenswrapper[4718]: E1210 14:59:03.893188 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c" Dec 10 14:59:03 crc kubenswrapper[4718]: E1210 14:59:03.893482 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmcqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,H
TTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(18de3c3d-d30b-4c09-b1a2-3a6376de8843): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 14:59:03 crc kubenswrapper[4718]: E1210 14:59:03.894874 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.207661 4718 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.285945 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-swift-storage-0\") pod \"830caa3e-735c-4fda-9323-ad5cf8d4779a\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.286040 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-svc\") pod \"830caa3e-735c-4fda-9323-ad5cf8d4779a\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.286239 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4gg\" (UniqueName: \"kubernetes.io/projected/830caa3e-735c-4fda-9323-ad5cf8d4779a-kube-api-access-4b4gg\") pod \"830caa3e-735c-4fda-9323-ad5cf8d4779a\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.286280 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-sb\") pod \"830caa3e-735c-4fda-9323-ad5cf8d4779a\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.286319 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-config\") pod \"830caa3e-735c-4fda-9323-ad5cf8d4779a\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.286351 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-nb\") pod \"830caa3e-735c-4fda-9323-ad5cf8d4779a\" (UID: \"830caa3e-735c-4fda-9323-ad5cf8d4779a\") " Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.325743 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/830caa3e-735c-4fda-9323-ad5cf8d4779a-kube-api-access-4b4gg" (OuterVolumeSpecName: "kube-api-access-4b4gg") pod "830caa3e-735c-4fda-9323-ad5cf8d4779a" (UID: "830caa3e-735c-4fda-9323-ad5cf8d4779a"). InnerVolumeSpecName "kube-api-access-4b4gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.393176 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4gg\" (UniqueName: \"kubernetes.io/projected/830caa3e-735c-4fda-9323-ad5cf8d4779a-kube-api-access-4b4gg\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.467778 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-4lgxw" event={"ID":"830caa3e-735c-4fda-9323-ad5cf8d4779a","Type":"ContainerDied","Data":"c35eb49bc0f96de8946bf83646b4fe04dd20714cb8b49b8887327be89d3b959c"} Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.467870 4718 scope.go:117] "RemoveContainer" containerID="9cdcf6bfb5a9d58472ed8acc4d9913a45c1ce0eeb3f6857524686c93222b8e7b" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.468077 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-4lgxw" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.506033 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "830caa3e-735c-4fda-9323-ad5cf8d4779a" (UID: "830caa3e-735c-4fda-9323-ad5cf8d4779a"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.510725 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="ceilometer-notification-agent" containerID="cri-o://05464e0792ca08ae50c09bb7386a735a606b53c8af4f8c35fb6a7490fef3355f" gracePeriod=30 Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.511541 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="sg-core" containerID="cri-o://601978f1b4f9d5b5406fb43ec35cd234eb953245e78bb08c3e5c147e1a0788aa" gracePeriod=30 Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.531624 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "830caa3e-735c-4fda-9323-ad5cf8d4779a" (UID: "830caa3e-735c-4fda-9323-ad5cf8d4779a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.542180 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-config" (OuterVolumeSpecName: "config") pod "830caa3e-735c-4fda-9323-ad5cf8d4779a" (UID: "830caa3e-735c-4fda-9323-ad5cf8d4779a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.622871 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.622931 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.622953 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.659813 4718 scope.go:117] "RemoveContainer" containerID="50dccb61ea20f2f45d72195938f69e3a3d8444e08fd9804061d2efd12042234e" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.684621 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "830caa3e-735c-4fda-9323-ad5cf8d4779a" (UID: "830caa3e-735c-4fda-9323-ad5cf8d4779a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.705386 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "830caa3e-735c-4fda-9323-ad5cf8d4779a" (UID: "830caa3e-735c-4fda-9323-ad5cf8d4779a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.727661 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.727729 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/830caa3e-735c-4fda-9323-ad5cf8d4779a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.961500 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578598f949-4lgxw"] Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.997657 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578598f949-4lgxw"] Dec 10 14:59:04 crc kubenswrapper[4718]: I1210 14:59:04.999591 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d67c77454-2pqgh"] Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.155115 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-86c6894dc7-l9prg"] Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.299639 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-5ccv8"] Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.364057 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-595577df7d-rjzmf"] Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.550535 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67c77454-2pqgh" event={"ID":"6cafdb55-0f7a-4b44-94a2-f3005a0384e2","Type":"ContainerStarted","Data":"cf2e8fd9154fba32623e8896aa3e26bf8d732cb88dcea2dc43696f144018b263"} Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.577123 4718 
generic.go:334] "Generic (PLEG): container finished" podID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerID="601978f1b4f9d5b5406fb43ec35cd234eb953245e78bb08c3e5c147e1a0788aa" exitCode=2 Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.577251 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18de3c3d-d30b-4c09-b1a2-3a6376de8843","Type":"ContainerDied","Data":"601978f1b4f9d5b5406fb43ec35cd234eb953245e78bb08c3e5c147e1a0788aa"} Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.593614 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" event={"ID":"12eb657b-f669-41c3-9125-9266909468a3","Type":"ContainerStarted","Data":"e6815e87731b995b2225e0258fe50cdddcdde49264749d3a2322b5ccab7ea88e"} Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.608828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86c6894dc7-l9prg" event={"ID":"f8b33603-9f2b-410e-a4ae-52b20ea62bd9","Type":"ContainerStarted","Data":"56e0eda3408dc23178b9c6b3874a170e1e23bdeb4d4cb6d71fbd5087dc8a644f"} Dec 10 14:59:05 crc kubenswrapper[4718]: I1210 14:59:05.615647 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" event={"ID":"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc","Type":"ContainerStarted","Data":"a5390f4d0c574fc54ffad6b0afe402d69394f1dd068bbc1e2091d032a8880797"} Dec 10 14:59:06 crc kubenswrapper[4718]: I1210 14:59:06.087687 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" path="/var/lib/kubelet/pods/830caa3e-735c-4fda-9323-ad5cf8d4779a/volumes" Dec 10 14:59:06 crc kubenswrapper[4718]: I1210 14:59:06.099751 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:59:06 crc kubenswrapper[4718]: I1210 14:59:06.121798 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/placement-7cb9f4c9bb-nx4ml" Dec 10 14:59:06 crc kubenswrapper[4718]: I1210 14:59:06.721198 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" event={"ID":"12eb657b-f669-41c3-9125-9266909468a3","Type":"ContainerStarted","Data":"2ad1a6919d25e557d590927ed1b0e6f4d76c12b13b66d93244c5b7512a178906"} Dec 10 14:59:06 crc kubenswrapper[4718]: I1210 14:59:06.913649 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-578598f949-4lgxw" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: i/o timeout" Dec 10 14:59:07 crc kubenswrapper[4718]: I1210 14:59:07.021893 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:59:07 crc kubenswrapper[4718]: E1210 14:59:07.022229 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:59:07 crc kubenswrapper[4718]: I1210 14:59:07.740335 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67c77454-2pqgh" event={"ID":"6cafdb55-0f7a-4b44-94a2-f3005a0384e2","Type":"ContainerStarted","Data":"e4ba03d50427fd13336a207c979789787b9ff13104666c18e30046c5b47b72e9"} Dec 10 14:59:07 crc kubenswrapper[4718]: I1210 14:59:07.745559 4718 generic.go:334] "Generic (PLEG): container finished" podID="12eb657b-f669-41c3-9125-9266909468a3" containerID="2ad1a6919d25e557d590927ed1b0e6f4d76c12b13b66d93244c5b7512a178906" exitCode=0 Dec 10 14:59:07 crc kubenswrapper[4718]: I1210 14:59:07.745636 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" event={"ID":"12eb657b-f669-41c3-9125-9266909468a3","Type":"ContainerDied","Data":"2ad1a6919d25e557d590927ed1b0e6f4d76c12b13b66d93244c5b7512a178906"} Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.486548 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.486966 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.779529 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67c77454-2pqgh" event={"ID":"6cafdb55-0f7a-4b44-94a2-f3005a0384e2","Type":"ContainerStarted","Data":"1037ac85c650d8e987c279f98a080beb186b35fbe14ad70777d587dde75c1003"} Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.779600 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.779622 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.793913 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" event={"ID":"12eb657b-f669-41c3-9125-9266909468a3","Type":"ContainerStarted","Data":"2f90c462de2b75a96cca3cd234b5626e538f196292dbd86ecd426a5f03fda4b8"} Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.795534 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.816008 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d67c77454-2pqgh" podStartSLOduration=6.815972924 podStartE2EDuration="6.815972924s" 
podCreationTimestamp="2025-12-10 14:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:08.810074784 +0000 UTC m=+1653.759298191" watchObservedRunningTime="2025-12-10 14:59:08.815972924 +0000 UTC m=+1653.765196351" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.869672 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" podStartSLOduration=6.869636883 podStartE2EDuration="6.869636883s" podCreationTimestamp="2025-12-10 14:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:08.857873403 +0000 UTC m=+1653.807096840" watchObservedRunningTime="2025-12-10 14:59:08.869636883 +0000 UTC m=+1653.818860300" Dec 10 14:59:08 crc kubenswrapper[4718]: I1210 14:59:08.955842 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6bcdc7c9dc-hxhdn" Dec 10 14:59:09 crc kubenswrapper[4718]: I1210 14:59:09.788965 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 14:59:10 crc kubenswrapper[4718]: I1210 14:59:10.262348 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 14:59:10 crc kubenswrapper[4718]: I1210 14:59:10.263119 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 14:59:10 crc kubenswrapper[4718]: I1210 14:59:10.850003 4718 generic.go:334] "Generic (PLEG): container finished" podID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerID="05464e0792ca08ae50c09bb7386a735a606b53c8af4f8c35fb6a7490fef3355f" exitCode=0 Dec 10 14:59:10 crc kubenswrapper[4718]: I1210 14:59:10.850454 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"18de3c3d-d30b-4c09-b1a2-3a6376de8843","Type":"ContainerDied","Data":"05464e0792ca08ae50c09bb7386a735a606b53c8af4f8c35fb6a7490fef3355f"} Dec 10 14:59:10 crc kubenswrapper[4718]: I1210 14:59:10.928070 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.006194 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58896fd778-pk5pp"] Dec 10 14:59:11 crc kubenswrapper[4718]: E1210 14:59:11.006916 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="init" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.006938 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="init" Dec 10 14:59:11 crc kubenswrapper[4718]: E1210 14:59:11.006981 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="dnsmasq-dns" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.006990 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="dnsmasq-dns" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.007357 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="830caa3e-735c-4fda-9323-ad5cf8d4779a" containerName="dnsmasq-dns" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.008926 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.030169 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.030432 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.170287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74sjc\" (UniqueName: \"kubernetes.io/projected/c53fcbf5-3330-4aff-a699-ff475344e705-kube-api-access-74sjc\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.170506 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-internal-tls-certs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.307099 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-config-data-custom\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.307281 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53fcbf5-3330-4aff-a699-ff475344e705-logs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: 
\"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.307326 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-public-tls-certs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.307382 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-combined-ca-bundle\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.307628 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-config-data\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.317415 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58896fd778-pk5pp"] Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.410926 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-config-data-custom\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.411012 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53fcbf5-3330-4aff-a699-ff475344e705-logs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.411034 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-public-tls-certs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.411056 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-combined-ca-bundle\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.411108 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-config-data\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.411176 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74sjc\" (UniqueName: \"kubernetes.io/projected/c53fcbf5-3330-4aff-a699-ff475344e705-kube-api-access-74sjc\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.411242 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-internal-tls-certs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.455144 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53fcbf5-3330-4aff-a699-ff475344e705-logs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.456321 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-config-data-custom\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.456780 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-internal-tls-certs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.457217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-public-tls-certs\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.475599 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-combined-ca-bundle\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.482116 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53fcbf5-3330-4aff-a699-ff475344e705-config-data\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.574303 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74sjc\" (UniqueName: \"kubernetes.io/projected/c53fcbf5-3330-4aff-a699-ff475344e705-kube-api-access-74sjc\") pod \"barbican-api-58896fd778-pk5pp\" (UID: \"c53fcbf5-3330-4aff-a699-ff475344e705\") " pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:11 crc kubenswrapper[4718]: I1210 14:59:11.672307 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:12 crc kubenswrapper[4718]: I1210 14:59:12.021539 4718 scope.go:117] "RemoveContainer" containerID="fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.326809 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zkz"] Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.330221 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.341926 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zkz"] Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.344068 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-catalog-content\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.344190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drhr\" (UniqueName: \"kubernetes.io/projected/7ced0eff-a905-4c77-abbb-f2417e807e89-kube-api-access-7drhr\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.344256 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-utilities\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.446699 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7drhr\" (UniqueName: \"kubernetes.io/projected/7ced0eff-a905-4c77-abbb-f2417e807e89-kube-api-access-7drhr\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.446797 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-utilities\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.446952 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-catalog-content\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.447332 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-utilities\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.447426 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-catalog-content\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.479838 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7drhr\" (UniqueName: \"kubernetes.io/projected/7ced0eff-a905-4c77-abbb-f2417e807e89-kube-api-access-7drhr\") pod \"redhat-marketplace-p2zkz\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.603664 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.685711 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.686335 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-gsztn"] Dec 10 14:59:13 crc kubenswrapper[4718]: I1210 14:59:13.689641 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" containerName="dnsmasq-dns" containerID="cri-o://0a8f037895e338fac6c2f2e2bba5b4d6df491641111f8a8447bdcfb683532e95" gracePeriod=10 Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.212945 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.214789 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.217411 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.219564 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.219786 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-msjbj" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.259452 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.368949 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5k8\" (UniqueName: \"kubernetes.io/projected/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-kube-api-access-ct5k8\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.369463 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-openstack-config\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.369621 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.369644 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.531037 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.531092 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.531199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5k8\" (UniqueName: \"kubernetes.io/projected/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-kube-api-access-ct5k8\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.531290 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-openstack-config\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.534123 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-openstack-config\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.545859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-openstack-config-secret\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.553740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.589354 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct5k8\" (UniqueName: \"kubernetes.io/projected/c0b43254-f8fe-4187-a8ce-aa65f7ac327e-kube-api-access-ct5k8\") pod \"openstackclient\" (UID: \"c0b43254-f8fe-4187-a8ce-aa65f7ac327e\") " pod="openstack/openstackclient" Dec 10 14:59:14 crc kubenswrapper[4718]: I1210 14:59:14.844123 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.130904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18de3c3d-d30b-4c09-b1a2-3a6376de8843","Type":"ContainerDied","Data":"6c0bd3e4a77a149951b0ce1032fdd782f58f3c67d6a2eedd0de928d6e88e5d7b"} Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.131424 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c0bd3e4a77a149951b0ce1032fdd782f58f3c67d6a2eedd0de928d6e88e5d7b" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.149788 4718 generic.go:334] "Generic (PLEG): container finished" podID="1c5b1250-54f6-4cbb-8106-452a43018155" containerID="0a8f037895e338fac6c2f2e2bba5b4d6df491641111f8a8447bdcfb683532e95" exitCode=0 Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.149861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" event={"ID":"1c5b1250-54f6-4cbb-8106-452a43018155","Type":"ContainerDied","Data":"0a8f037895e338fac6c2f2e2bba5b4d6df491641111f8a8447bdcfb683532e95"} Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.259443 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.323808 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmcqr\" (UniqueName: \"kubernetes.io/projected/18de3c3d-d30b-4c09-b1a2-3a6376de8843-kube-api-access-xmcqr\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.323939 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-run-httpd\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.324127 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-sg-core-conf-yaml\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.324316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-combined-ca-bundle\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.325723 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.326413 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-config-data\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.333682 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.396692 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-config-data" (OuterVolumeSpecName: "config-data") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.402566 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.406190 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.420554 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18de3c3d-d30b-4c09-b1a2-3a6376de8843-kube-api-access-xmcqr" (OuterVolumeSpecName: "kube-api-access-xmcqr") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "kube-api-access-xmcqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.430936 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.446103 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-log-httpd\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.446378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-scripts\") pod \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\" (UID: \"18de3c3d-d30b-4c09-b1a2-3a6376de8843\") " Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.446612 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.448840 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.448907 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.448924 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmcqr\" (UniqueName: \"kubernetes.io/projected/18de3c3d-d30b-4c09-b1a2-3a6376de8843-kube-api-access-xmcqr\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.448940 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.448949 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18de3c3d-d30b-4c09-b1a2-3a6376de8843-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.471153 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-scripts" (OuterVolumeSpecName: "scripts") pod "18de3c3d-d30b-4c09-b1a2-3a6376de8843" (UID: "18de3c3d-d30b-4c09-b1a2-3a6376de8843"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.553562 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18de3c3d-d30b-4c09-b1a2-3a6376de8843-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.554498 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77c9ddb894-brvxz" podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:16 crc kubenswrapper[4718]: I1210 14:59:16.926601 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.107527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-svc\") pod \"1c5b1250-54f6-4cbb-8106-452a43018155\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.108297 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-nb\") pod \"1c5b1250-54f6-4cbb-8106-452a43018155\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.108327 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-sb\") pod \"1c5b1250-54f6-4cbb-8106-452a43018155\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 
14:59:17.108438 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-config\") pod \"1c5b1250-54f6-4cbb-8106-452a43018155\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.108490 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-swift-storage-0\") pod \"1c5b1250-54f6-4cbb-8106-452a43018155\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.108514 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdjbs\" (UniqueName: \"kubernetes.io/projected/1c5b1250-54f6-4cbb-8106-452a43018155-kube-api-access-fdjbs\") pod \"1c5b1250-54f6-4cbb-8106-452a43018155\" (UID: \"1c5b1250-54f6-4cbb-8106-452a43018155\") " Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.165334 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5b1250-54f6-4cbb-8106-452a43018155-kube-api-access-fdjbs" (OuterVolumeSpecName: "kube-api-access-fdjbs") pod "1c5b1250-54f6-4cbb-8106-452a43018155" (UID: "1c5b1250-54f6-4cbb-8106-452a43018155"). InnerVolumeSpecName "kube-api-access-fdjbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.218631 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdjbs\" (UniqueName: \"kubernetes.io/projected/1c5b1250-54f6-4cbb-8106-452a43018155-kube-api-access-fdjbs\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.227624 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" event={"ID":"1c5b1250-54f6-4cbb-8106-452a43018155","Type":"ContainerDied","Data":"5840c6abcb6b83a95107d847ebe58b80a716c057e917dcf3616ad4a036b4d024"} Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.227728 4718 scope.go:117] "RemoveContainer" containerID="0a8f037895e338fac6c2f2e2bba5b4d6df491641111f8a8447bdcfb683532e95" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.227759 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-gsztn" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.228120 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.639900 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.851894 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.889240 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 10 14:59:17 crc kubenswrapper[4718]: I1210 14:59:17.914667 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.084557 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c5b1250-54f6-4cbb-8106-452a43018155" (UID: "1c5b1250-54f6-4cbb-8106-452a43018155"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.117812 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.146885 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c5b1250-54f6-4cbb-8106-452a43018155" (UID: "1c5b1250-54f6-4cbb-8106-452a43018155"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.177439 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-config" (OuterVolumeSpecName: "config") pod "1c5b1250-54f6-4cbb-8106-452a43018155" (UID: "1c5b1250-54f6-4cbb-8106-452a43018155"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.235876 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c5b1250-54f6-4cbb-8106-452a43018155" (UID: "1c5b1250-54f6-4cbb-8106-452a43018155"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.241027 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.241099 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.241114 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.263024 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c5b1250-54f6-4cbb-8106-452a43018155" (UID: "1c5b1250-54f6-4cbb-8106-452a43018155"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.343676 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zkz"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.344197 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58896fd778-pk5pp"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.345368 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c5b1250-54f6-4cbb-8106-452a43018155-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.356829 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58896fd778-pk5pp" event={"ID":"c53fcbf5-3330-4aff-a699-ff475344e705","Type":"ContainerStarted","Data":"d9ee984c58993198e35a9419f3778451adade3ff7a764dfd0e89e0cad843406e"} Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.361014 4718 scope.go:117] "RemoveContainer" containerID="aa3b37a86998d342d296e3eda1558d1a2bbd7a212cee3c2e93cd2b0d22f9d91a" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.390493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerStarted","Data":"01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36"} Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.422949 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c0b43254-f8fe-4187-a8ce-aa65f7ac327e","Type":"ContainerStarted","Data":"1533bf4aecf2e92d20dd6a19d05957fb53b33d433d3eb3db8918aff7d556ac8f"} Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.457828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zkz" 
event={"ID":"7ced0eff-a905-4c77-abbb-f2417e807e89","Type":"ContainerStarted","Data":"a527bc43e6e16532fce409e9f5b450f26f416d3e4034cf1a4bde3c01fbef421d"} Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.478826 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.559133 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.590666 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.591061 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.655737 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:18 crc kubenswrapper[4718]: E1210 14:59:18.657592 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="sg-core" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.657620 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="sg-core" Dec 10 14:59:18 crc kubenswrapper[4718]: E1210 14:59:18.657638 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="ceilometer-notification-agent" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.657646 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="ceilometer-notification-agent" Dec 10 14:59:18 crc kubenswrapper[4718]: E1210 14:59:18.657686 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" containerName="dnsmasq-dns" Dec 10 14:59:18 crc kubenswrapper[4718]: 
I1210 14:59:18.657708 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" containerName="dnsmasq-dns" Dec 10 14:59:18 crc kubenswrapper[4718]: E1210 14:59:18.657731 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" containerName="init" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.657740 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" containerName="init" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.658221 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" containerName="dnsmasq-dns" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.658245 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="sg-core" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.658260 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" containerName="ceilometer-notification-agent" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.681297 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.689484 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.689762 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.774591 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-log-httpd\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823602 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-run-httpd\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823693 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-config-data\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823755 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823833 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hc79\" (UniqueName: \"kubernetes.io/projected/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-kube-api-access-9hc79\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823888 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.823936 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-scripts\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.832036 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.885906 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-gsztn"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.903741 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-gsztn"] Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926353 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-config-data\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " 
pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926524 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hc79\" (UniqueName: \"kubernetes.io/projected/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-kube-api-access-9hc79\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926652 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-scripts\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-log-httpd\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.926839 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-run-httpd\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.938077 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-run-httpd\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:18 crc kubenswrapper[4718]: I1210 14:59:18.938099 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-log-httpd\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.027830 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:59:19 crc kubenswrapper[4718]: E1210 14:59:19.028407 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.040104 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-config-data\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.041786 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-scripts\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.041837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.046165 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.046468 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hc79\" (UniqueName: \"kubernetes.io/projected/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-kube-api-access-9hc79\") pod \"ceilometer-0\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.087491 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.504928 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerID="00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735" exitCode=0 Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.505108 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zkz" event={"ID":"7ced0eff-a905-4c77-abbb-f2417e807e89","Type":"ContainerDied","Data":"00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735"} Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.518288 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" event={"ID":"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc","Type":"ContainerStarted","Data":"256f25685c8797bca505ca6dba07822dc846fb91ca5e418de49e22e4dccdc711"} Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.632480 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:19 crc kubenswrapper[4718]: I1210 14:59:19.655650 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:19 crc kubenswrapper[4718]: W1210 14:59:19.656650 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc0d50f_35f9_45b0_8316_cdf12ae72a97.slice/crio-7ceb32b8779cb7fa70cc58443dafa4c2c0b4e1d53a77d0206e76eb31d16c04ff WatchSource:0}: Error finding container 7ceb32b8779cb7fa70cc58443dafa4c2c0b4e1d53a77d0206e76eb31d16c04ff: Status 404 returned error can't find the container with id 7ceb32b8779cb7fa70cc58443dafa4c2c0b4e1d53a77d0206e76eb31d16c04ff Dec 10 14:59:20 crc kubenswrapper[4718]: I1210 14:59:20.040951 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="18de3c3d-d30b-4c09-b1a2-3a6376de8843" path="/var/lib/kubelet/pods/18de3c3d-d30b-4c09-b1a2-3a6376de8843/volumes" Dec 10 14:59:20 crc kubenswrapper[4718]: I1210 14:59:20.041884 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5b1250-54f6-4cbb-8106-452a43018155" path="/var/lib/kubelet/pods/1c5b1250-54f6-4cbb-8106-452a43018155/volumes" Dec 10 14:59:20 crc kubenswrapper[4718]: I1210 14:59:20.553575 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58896fd778-pk5pp" event={"ID":"c53fcbf5-3330-4aff-a699-ff475344e705","Type":"ContainerStarted","Data":"2eeda2cd2ed43044f0640bb0ff8760f5682e117b501ad549ba10b3a0c57566f0"} Dec 10 14:59:20 crc kubenswrapper[4718]: I1210 14:59:20.563532 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerStarted","Data":"7ceb32b8779cb7fa70cc58443dafa4c2c0b4e1d53a77d0206e76eb31d16c04ff"} Dec 10 14:59:20 crc kubenswrapper[4718]: I1210 14:59:20.579809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86c6894dc7-l9prg" event={"ID":"f8b33603-9f2b-410e-a4ae-52b20ea62bd9","Type":"ContainerStarted","Data":"4b76640f110e9f91b3fcf7ea68741f2cfd8aeb1d1c0a1244374515318d2be4fe"} Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.604357 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-86c6894dc7-l9prg" event={"ID":"f8b33603-9f2b-410e-a4ae-52b20ea62bd9","Type":"ContainerStarted","Data":"900cb4673b19f51eb3de70e0a87dcbc8cf98abcef374968285e8132d875d66b3"} Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.615338 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" event={"ID":"2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc","Type":"ContainerStarted","Data":"4d05e346fd503ba465f12242f396c1ce34331d34b6f5e444073d4d0b423d455f"} Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 
14:59:21.636861 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58896fd778-pk5pp" event={"ID":"c53fcbf5-3330-4aff-a699-ff475344e705","Type":"ContainerStarted","Data":"06b5f973d612c0c5cc5aa13969e7d15d2fb6278839bedd8488d14e7743815b5b"} Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.637635 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.637726 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.648907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerStarted","Data":"e71ee9423876c4df1d64e82e0e300727ba7d89ccc168d37fab5011a004587e62"} Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.657363 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-595577df7d-rjzmf" podStartSLOduration=8.226539154 podStartE2EDuration="19.65732361s" podCreationTimestamp="2025-12-10 14:59:02 +0000 UTC" firstStartedPulling="2025-12-10 14:59:05.356674531 +0000 UTC m=+1650.305897948" lastFinishedPulling="2025-12-10 14:59:16.787458987 +0000 UTC m=+1661.736682404" observedRunningTime="2025-12-10 14:59:21.649464809 +0000 UTC m=+1666.598688246" watchObservedRunningTime="2025-12-10 14:59:21.65732361 +0000 UTC m=+1666.606547047" Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.793183 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58896fd778-pk5pp" podStartSLOduration=11.793134597 podStartE2EDuration="11.793134597s" podCreationTimestamp="2025-12-10 14:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 14:59:21.681467916 +0000 UTC m=+1666.630691333" watchObservedRunningTime="2025-12-10 14:59:21.793134597 +0000 UTC m=+1666.742358014" Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.941114 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9xz82"] Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.943807 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:21 crc kubenswrapper[4718]: I1210 14:59:21.952279 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xz82"] Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.029661 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-catalog-content\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.029726 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-utilities\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.029790 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmrg\" (UniqueName: \"kubernetes.io/projected/39e78dd0-89b2-4d82-acc7-9b0818c40856-kube-api-access-qkmrg\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 
14:59:22.137994 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmrg\" (UniqueName: \"kubernetes.io/projected/39e78dd0-89b2-4d82-acc7-9b0818c40856-kube-api-access-qkmrg\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.138223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-catalog-content\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.138264 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-utilities\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.145103 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-catalog-content\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.145825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-utilities\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.180358 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmrg\" (UniqueName: \"kubernetes.io/projected/39e78dd0-89b2-4d82-acc7-9b0818c40856-kube-api-access-qkmrg\") pod \"community-operators-9xz82\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.531668 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.685718 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.757752 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-86c6894dc7-l9prg" podStartSLOduration=9.250347953 podStartE2EDuration="20.757710082s" podCreationTimestamp="2025-12-10 14:59:02 +0000 UTC" firstStartedPulling="2025-12-10 14:59:05.194439171 +0000 UTC m=+1650.143662588" lastFinishedPulling="2025-12-10 14:59:16.7018013 +0000 UTC m=+1661.651024717" observedRunningTime="2025-12-10 14:59:22.73805719 +0000 UTC m=+1667.687280617" watchObservedRunningTime="2025-12-10 14:59:22.757710082 +0000 UTC m=+1667.706933499" Dec 10 14:59:22 crc kubenswrapper[4718]: I1210 14:59:22.958763 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:23 crc kubenswrapper[4718]: I1210 
14:59:23.041326 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:24 crc kubenswrapper[4718]: I1210 14:59:24.296023 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9xz82"] Dec 10 14:59:24 crc kubenswrapper[4718]: I1210 14:59:24.995728 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerID="844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497" exitCode=0 Dec 10 14:59:24 crc kubenswrapper[4718]: I1210 14:59:24.995798 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zkz" event={"ID":"7ced0eff-a905-4c77-abbb-f2417e807e89","Type":"ContainerDied","Data":"844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497"} Dec 10 14:59:25 crc kubenswrapper[4718]: I1210 14:59:25.024528 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerStarted","Data":"62264a2d67bd8fd5ef03e7e98e4b1ddc79ea540a708bce5bd65507b8d2c77b31"} Dec 10 14:59:25 crc kubenswrapper[4718]: I1210 14:59:25.046427 4718 generic.go:334] "Generic (PLEG): container finished" podID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerID="2a18ee072e8c1857284445293cd826895b4c0cfb034749ba066430030d9efc2f" exitCode=0 Dec 10 14:59:25 crc kubenswrapper[4718]: I1210 14:59:25.046496 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xz82" event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerDied","Data":"2a18ee072e8c1857284445293cd826895b4c0cfb034749ba066430030d9efc2f"} Dec 10 14:59:25 crc kubenswrapper[4718]: I1210 14:59:25.046533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xz82" 
event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerStarted","Data":"809de080b0ed8973894a20049946ccbf3266affadfe22a153d77afa89ac28bcf"} Dec 10 14:59:25 crc kubenswrapper[4718]: I1210 14:59:25.247670 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.318222 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.319305 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.320746 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"18c2b9ed9086807a1a8a78e70010cd811dfcd5d47bc9bf542e696eb06efee24c"} pod="openstack/horizon-6bb7f498bd-pjx6h" containerMessage="Container horizon failed startup probe, will be restarted" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.320801 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" containerID="cri-o://18c2b9ed9086807a1a8a78e70010cd811dfcd5d47bc9bf542e696eb06efee24c" gracePeriod=30 Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.530618 4718 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-77c9ddb894-brvxz" podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.530741 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.532042 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7a921001c9ed00b14566bf730f10ca8fd9c20100e5de2d810562814d52773543"} pod="openstack/horizon-77c9ddb894-brvxz" containerMessage="Container horizon failed startup probe, will be restarted" Dec 10 14:59:26 crc kubenswrapper[4718]: I1210 14:59:26.532103 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77c9ddb894-brvxz" podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" containerName="horizon" containerID="cri-o://7a921001c9ed00b14566bf730f10ca8fd9c20100e5de2d810562814d52773543" gracePeriod=30 Dec 10 14:59:27 crc kubenswrapper[4718]: I1210 14:59:27.097408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerStarted","Data":"db4a4ebfa0197693d3cec46810cf27c433423fc868a385f9df331c8d99b47b10"} Dec 10 14:59:27 crc kubenswrapper[4718]: I1210 14:59:27.101106 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xz82" event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerStarted","Data":"0969c38e88f40864fa5502bfcc60fb16a711b763ad73c142d7536a727b2d3e18"} Dec 10 14:59:27 crc kubenswrapper[4718]: I1210 14:59:27.110927 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zkz" 
event={"ID":"7ced0eff-a905-4c77-abbb-f2417e807e89","Type":"ContainerStarted","Data":"f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274"} Dec 10 14:59:27 crc kubenswrapper[4718]: I1210 14:59:27.171914 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p2zkz" podStartSLOduration=9.082617146 podStartE2EDuration="14.17189187s" podCreationTimestamp="2025-12-10 14:59:13 +0000 UTC" firstStartedPulling="2025-12-10 14:59:20.591018628 +0000 UTC m=+1665.540242045" lastFinishedPulling="2025-12-10 14:59:25.680293352 +0000 UTC m=+1670.629516769" observedRunningTime="2025-12-10 14:59:27.168105244 +0000 UTC m=+1672.117328661" watchObservedRunningTime="2025-12-10 14:59:27.17189187 +0000 UTC m=+1672.121115287" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.503609 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6c7874df4c-ns7dm"] Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.520618 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c7874df4c-ns7dm"] Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.520824 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.524694 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.525418 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.536126 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.680147 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-config-data\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.680508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkw99\" (UniqueName: \"kubernetes.io/projected/52552bcb-7acc-4882-86ef-0353a39e7262-kube-api-access-qkw99\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.680638 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52552bcb-7acc-4882-86ef-0353a39e7262-run-httpd\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.680807 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-internal-tls-certs\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.680882 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-combined-ca-bundle\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.680973 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-public-tls-certs\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.681046 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52552bcb-7acc-4882-86ef-0353a39e7262-etc-swift\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.681128 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52552bcb-7acc-4882-86ef-0353a39e7262-log-httpd\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785297 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-config-data\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkw99\" (UniqueName: \"kubernetes.io/projected/52552bcb-7acc-4882-86ef-0353a39e7262-kube-api-access-qkw99\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785406 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52552bcb-7acc-4882-86ef-0353a39e7262-run-httpd\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785521 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-internal-tls-certs\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785541 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-combined-ca-bundle\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785572 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-public-tls-certs\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785592 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52552bcb-7acc-4882-86ef-0353a39e7262-etc-swift\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.785635 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52552bcb-7acc-4882-86ef-0353a39e7262-log-httpd\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.786528 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52552bcb-7acc-4882-86ef-0353a39e7262-log-httpd\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.788857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/52552bcb-7acc-4882-86ef-0353a39e7262-run-httpd\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.805273 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-internal-tls-certs\") pod 
\"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.825714 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-combined-ca-bundle\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.826312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-public-tls-certs\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.826761 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52552bcb-7acc-4882-86ef-0353a39e7262-config-data\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.838134 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/52552bcb-7acc-4882-86ef-0353a39e7262-etc-swift\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.862942 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkw99\" (UniqueName: \"kubernetes.io/projected/52552bcb-7acc-4882-86ef-0353a39e7262-kube-api-access-qkw99\") pod \"swift-proxy-6c7874df4c-ns7dm\" (UID: \"52552bcb-7acc-4882-86ef-0353a39e7262\") " 
pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:28 crc kubenswrapper[4718]: I1210 14:59:28.877784 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:30 crc kubenswrapper[4718]: I1210 14:59:30.171938 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c7874df4c-ns7dm"] Dec 10 14:59:30 crc kubenswrapper[4718]: I1210 14:59:30.232657 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:30 crc kubenswrapper[4718]: W1210 14:59:30.288646 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52552bcb_7acc_4882_86ef_0353a39e7262.slice/crio-21300ed78d13ba8a6604cc05c6cb1024d472a8c294d5cc9990a2494191087bf1 WatchSource:0}: Error finding container 21300ed78d13ba8a6604cc05c6cb1024d472a8c294d5cc9990a2494191087bf1: Status 404 returned error can't find the container with id 21300ed78d13ba8a6604cc05c6cb1024d472a8c294d5cc9990a2494191087bf1 Dec 10 14:59:30 crc kubenswrapper[4718]: I1210 14:59:30.929864 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58896fd778-pk5pp" Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.022804 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:59:31 crc kubenswrapper[4718]: E1210 14:59:31.023044 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:59:31 crc 
kubenswrapper[4718]: I1210 14:59:31.029895 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d67c77454-2pqgh"] Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.030323 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" containerID="cri-o://e4ba03d50427fd13336a207c979789787b9ff13104666c18e30046c5b47b72e9" gracePeriod=30 Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.030593 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" containerID="cri-o://1037ac85c650d8e987c279f98a080beb186b35fbe14ad70777d587dde75c1003" gracePeriod=30 Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.253755 4718 generic.go:334] "Generic (PLEG): container finished" podID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerID="e4ba03d50427fd13336a207c979789787b9ff13104666c18e30046c5b47b72e9" exitCode=143 Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.254294 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67c77454-2pqgh" event={"ID":"6cafdb55-0f7a-4b44-94a2-f3005a0384e2","Type":"ContainerDied","Data":"e4ba03d50427fd13336a207c979789787b9ff13104666c18e30046c5b47b72e9"} Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.285527 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c7874df4c-ns7dm" event={"ID":"52552bcb-7acc-4882-86ef-0353a39e7262","Type":"ContainerStarted","Data":"82337add600074e6809a03b259c2b2611152b7d6723050b64df2af35f7e9f5e7"} Dec 10 14:59:31 crc kubenswrapper[4718]: I1210 14:59:31.285619 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c7874df4c-ns7dm" 
event={"ID":"52552bcb-7acc-4882-86ef-0353a39e7262","Type":"ContainerStarted","Data":"21300ed78d13ba8a6604cc05c6cb1024d472a8c294d5cc9990a2494191087bf1"} Dec 10 14:59:32 crc kubenswrapper[4718]: I1210 14:59:32.323471 4718 generic.go:334] "Generic (PLEG): container finished" podID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerID="0969c38e88f40864fa5502bfcc60fb16a711b763ad73c142d7536a727b2d3e18" exitCode=0 Dec 10 14:59:32 crc kubenswrapper[4718]: I1210 14:59:32.323656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xz82" event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerDied","Data":"0969c38e88f40864fa5502bfcc60fb16a711b763ad73c142d7536a727b2d3e18"} Dec 10 14:59:32 crc kubenswrapper[4718]: I1210 14:59:32.334382 4718 generic.go:334] "Generic (PLEG): container finished" podID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerID="18c2b9ed9086807a1a8a78e70010cd811dfcd5d47bc9bf542e696eb06efee24c" exitCode=0 Dec 10 14:59:32 crc kubenswrapper[4718]: I1210 14:59:32.334493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerDied","Data":"18c2b9ed9086807a1a8a78e70010cd811dfcd5d47bc9bf542e696eb06efee24c"} Dec 10 14:59:33 crc kubenswrapper[4718]: I1210 14:59:33.368898 4718 generic.go:334] "Generic (PLEG): container finished" podID="e1a09589-44b9-49f4-8970-d3381c3d4b99" containerID="7a921001c9ed00b14566bf730f10ca8fd9c20100e5de2d810562814d52773543" exitCode=0 Dec 10 14:59:33 crc kubenswrapper[4718]: I1210 14:59:33.369559 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c9ddb894-brvxz" event={"ID":"e1a09589-44b9-49f4-8970-d3381c3d4b99","Type":"ContainerDied","Data":"7a921001c9ed00b14566bf730f10ca8fd9c20100e5de2d810562814d52773543"} Dec 10 14:59:33 crc kubenswrapper[4718]: I1210 14:59:33.687574 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:33 crc kubenswrapper[4718]: I1210 14:59:33.687669 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:33 crc kubenswrapper[4718]: I1210 14:59:33.775313 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.202785 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:36310->10.217.0.171:9311: read: connection reset by peer" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.202848 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d67c77454-2pqgh" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:36312->10.217.0.171:9311: read: connection reset by peer" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.394909 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c7874df4c-ns7dm" event={"ID":"52552bcb-7acc-4882-86ef-0353a39e7262","Type":"ContainerStarted","Data":"6785b8803dd1c2f54b55ba681d81b846a232de2e607dd46c0432db13a071828a"} Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.396930 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.397011 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.404364 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-9xz82" event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerStarted","Data":"92cc544e9fbbf61ed33fa3623394151b70c945bc4000f8dd264954aec83cc3fb"} Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.414978 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c9ddb894-brvxz" event={"ID":"e1a09589-44b9-49f4-8970-d3381c3d4b99","Type":"ContainerStarted","Data":"e226f70ff8101fcd4b41685da9c2efd11a48147848a21a9047bebec72846c913"} Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.431993 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6c7874df4c-ns7dm" podStartSLOduration=6.431960842 podStartE2EDuration="6.431960842s" podCreationTimestamp="2025-12-10 14:59:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:34.426944564 +0000 UTC m=+1679.376168071" watchObservedRunningTime="2025-12-10 14:59:34.431960842 +0000 UTC m=+1679.381184259" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.455383 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerStarted","Data":"b117b070cadfb38d1e352bf4a066eddfacde995e6de0938901cbc9ed0bddda18"} Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.488274 4718 generic.go:334] "Generic (PLEG): container finished" podID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerID="1037ac85c650d8e987c279f98a080beb186b35fbe14ad70777d587dde75c1003" exitCode=0 Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.488939 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67c77454-2pqgh" event={"ID":"6cafdb55-0f7a-4b44-94a2-f3005a0384e2","Type":"ContainerDied","Data":"1037ac85c650d8e987c279f98a080beb186b35fbe14ad70777d587dde75c1003"} Dec 10 14:59:34 
crc kubenswrapper[4718]: I1210 14:59:34.497492 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9xz82" podStartSLOduration=5.034052583 podStartE2EDuration="13.497460234s" podCreationTimestamp="2025-12-10 14:59:21 +0000 UTC" firstStartedPulling="2025-12-10 14:59:25.105846487 +0000 UTC m=+1670.055069904" lastFinishedPulling="2025-12-10 14:59:33.569254138 +0000 UTC m=+1678.518477555" observedRunningTime="2025-12-10 14:59:34.460591733 +0000 UTC m=+1679.409815150" watchObservedRunningTime="2025-12-10 14:59:34.497460234 +0000 UTC m=+1679.446683651" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.517938 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerStarted","Data":"8304b3ca9e66802bf4345c39855400c88f7e0239f1898e9d6ca36bce702475f1"} Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.518133 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.621378 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.107551374 podStartE2EDuration="16.621336456s" podCreationTimestamp="2025-12-10 14:59:18 +0000 UTC" firstStartedPulling="2025-12-10 14:59:19.665952732 +0000 UTC m=+1664.615176149" lastFinishedPulling="2025-12-10 14:59:33.179737814 +0000 UTC m=+1678.128961231" observedRunningTime="2025-12-10 14:59:34.579919529 +0000 UTC m=+1679.529142946" watchObservedRunningTime="2025-12-10 14:59:34.621336456 +0000 UTC m=+1679.570559873" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.651550 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.905823 4718 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:34 crc kubenswrapper[4718]: I1210 14:59:34.928651 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.000418 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89cj9\" (UniqueName: \"kubernetes.io/projected/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-kube-api-access-89cj9\") pod \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.000501 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data\") pod \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.000607 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-combined-ca-bundle\") pod \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.000638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-logs\") pod \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.000667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data-custom\") pod \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\" (UID: \"6cafdb55-0f7a-4b44-94a2-f3005a0384e2\") " Dec 10 
14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.002624 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-logs" (OuterVolumeSpecName: "logs") pod "6cafdb55-0f7a-4b44-94a2-f3005a0384e2" (UID: "6cafdb55-0f7a-4b44-94a2-f3005a0384e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.011794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6cafdb55-0f7a-4b44-94a2-f3005a0384e2" (UID: "6cafdb55-0f7a-4b44-94a2-f3005a0384e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.040888 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-kube-api-access-89cj9" (OuterVolumeSpecName: "kube-api-access-89cj9") pod "6cafdb55-0f7a-4b44-94a2-f3005a0384e2" (UID: "6cafdb55-0f7a-4b44-94a2-f3005a0384e2"). InnerVolumeSpecName "kube-api-access-89cj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.049144 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zkz"] Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.050578 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cafdb55-0f7a-4b44-94a2-f3005a0384e2" (UID: "6cafdb55-0f7a-4b44-94a2-f3005a0384e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.105699 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89cj9\" (UniqueName: \"kubernetes.io/projected/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-kube-api-access-89cj9\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.105752 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.105764 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-logs\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.105775 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.181510 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data" (OuterVolumeSpecName: "config-data") pod "6cafdb55-0f7a-4b44-94a2-f3005a0384e2" (UID: "6cafdb55-0f7a-4b44-94a2-f3005a0384e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.210157 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cafdb55-0f7a-4b44-94a2-f3005a0384e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.571253 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d67c77454-2pqgh" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.571267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d67c77454-2pqgh" event={"ID":"6cafdb55-0f7a-4b44-94a2-f3005a0384e2","Type":"ContainerDied","Data":"cf2e8fd9154fba32623e8896aa3e26bf8d732cb88dcea2dc43696f144018b263"} Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.571446 4718 scope.go:117] "RemoveContainer" containerID="1037ac85c650d8e987c279f98a080beb186b35fbe14ad70777d587dde75c1003" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.582163 4718 generic.go:334] "Generic (PLEG): container finished" podID="71a092ad-773d-47b4-bc1f-73358adecf4a" containerID="cb50add0d25a417f0511946c6a546ff6032e149e5d3e6c23746e66422a172656" exitCode=0 Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.583508 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hctjl" event={"ID":"71a092ad-773d-47b4-bc1f-73358adecf4a","Type":"ContainerDied","Data":"cb50add0d25a417f0511946c6a546ff6032e149e5d3e6c23746e66422a172656"} Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.665495 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d67c77454-2pqgh"] Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.683222 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d67c77454-2pqgh"] Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.683810 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6c7874df4c-ns7dm" podUID="52552bcb-7acc-4882-86ef-0353a39e7262" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 14:59:35 crc kubenswrapper[4718]: I1210 14:59:35.727740 4718 scope.go:117] "RemoveContainer" containerID="e4ba03d50427fd13336a207c979789787b9ff13104666c18e30046c5b47b72e9" Dec 10 14:59:36 crc kubenswrapper[4718]: I1210 14:59:36.063665 
4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" path="/var/lib/kubelet/pods/6cafdb55-0f7a-4b44-94a2-f3005a0384e2/volumes" Dec 10 14:59:36 crc kubenswrapper[4718]: I1210 14:59:36.602527 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p2zkz" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="registry-server" containerID="cri-o://f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274" gracePeriod=2 Dec 10 14:59:36 crc kubenswrapper[4718]: I1210 14:59:36.603175 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-central-agent" containerID="cri-o://e71ee9423876c4df1d64e82e0e300727ba7d89ccc168d37fab5011a004587e62" gracePeriod=30 Dec 10 14:59:36 crc kubenswrapper[4718]: I1210 14:59:36.603414 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="sg-core" containerID="cri-o://db4a4ebfa0197693d3cec46810cf27c433423fc868a385f9df331c8d99b47b10" gracePeriod=30 Dec 10 14:59:36 crc kubenswrapper[4718]: I1210 14:59:36.603447 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-notification-agent" containerID="cri-o://62264a2d67bd8fd5ef03e7e98e4b1ddc79ea540a708bce5bd65507b8d2c77b31" gracePeriod=30 Dec 10 14:59:36 crc kubenswrapper[4718]: I1210 14:59:36.603504 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="proxy-httpd" containerID="cri-o://8304b3ca9e66802bf4345c39855400c88f7e0239f1898e9d6ca36bce702475f1" gracePeriod=30 Dec 10 14:59:36 crc kubenswrapper[4718]: 
I1210 14:59:36.616066 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.245584 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-hctjl" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.257676 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299310 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71a092ad-773d-47b4-bc1f-73358adecf4a-etc-machine-id\") pod \"71a092ad-773d-47b4-bc1f-73358adecf4a\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/71a092ad-773d-47b4-bc1f-73358adecf4a-kube-api-access-bvqkf\") pod \"71a092ad-773d-47b4-bc1f-73358adecf4a\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299621 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-scripts\") pod \"71a092ad-773d-47b4-bc1f-73358adecf4a\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299742 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-config-data\") pod \"71a092ad-773d-47b4-bc1f-73358adecf4a\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299779 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-db-sync-config-data\") pod \"71a092ad-773d-47b4-bc1f-73358adecf4a\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-combined-ca-bundle\") pod \"71a092ad-773d-47b4-bc1f-73358adecf4a\" (UID: \"71a092ad-773d-47b4-bc1f-73358adecf4a\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.299935 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-catalog-content\") pod \"7ced0eff-a905-4c77-abbb-f2417e807e89\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.300054 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-utilities\") pod \"7ced0eff-a905-4c77-abbb-f2417e807e89\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.300131 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7drhr\" (UniqueName: \"kubernetes.io/projected/7ced0eff-a905-4c77-abbb-f2417e807e89-kube-api-access-7drhr\") pod \"7ced0eff-a905-4c77-abbb-f2417e807e89\" (UID: \"7ced0eff-a905-4c77-abbb-f2417e807e89\") " Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.313343 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71a092ad-773d-47b4-bc1f-73358adecf4a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod 
"71a092ad-773d-47b4-bc1f-73358adecf4a" (UID: "71a092ad-773d-47b4-bc1f-73358adecf4a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.313865 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-utilities" (OuterVolumeSpecName: "utilities") pod "7ced0eff-a905-4c77-abbb-f2417e807e89" (UID: "7ced0eff-a905-4c77-abbb-f2417e807e89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.322208 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a092ad-773d-47b4-bc1f-73358adecf4a-kube-api-access-bvqkf" (OuterVolumeSpecName: "kube-api-access-bvqkf") pod "71a092ad-773d-47b4-bc1f-73358adecf4a" (UID: "71a092ad-773d-47b4-bc1f-73358adecf4a"). InnerVolumeSpecName "kube-api-access-bvqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.328595 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-scripts" (OuterVolumeSpecName: "scripts") pod "71a092ad-773d-47b4-bc1f-73358adecf4a" (UID: "71a092ad-773d-47b4-bc1f-73358adecf4a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.377506 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ced0eff-a905-4c77-abbb-f2417e807e89-kube-api-access-7drhr" (OuterVolumeSpecName: "kube-api-access-7drhr") pod "7ced0eff-a905-4c77-abbb-f2417e807e89" (UID: "7ced0eff-a905-4c77-abbb-f2417e807e89"). InnerVolumeSpecName "kube-api-access-7drhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.381329 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "71a092ad-773d-47b4-bc1f-73358adecf4a" (UID: "71a092ad-773d-47b4-bc1f-73358adecf4a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.382548 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ced0eff-a905-4c77-abbb-f2417e807e89" (UID: "7ced0eff-a905-4c77-abbb-f2417e807e89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.396723 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71a092ad-773d-47b4-bc1f-73358adecf4a" (UID: "71a092ad-773d-47b4-bc1f-73358adecf4a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408051 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/71a092ad-773d-47b4-bc1f-73358adecf4a-kube-api-access-bvqkf\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408097 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408109 4718 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408118 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408128 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408136 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ced0eff-a905-4c77-abbb-f2417e807e89-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.408147 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7drhr\" (UniqueName: \"kubernetes.io/projected/7ced0eff-a905-4c77-abbb-f2417e807e89-kube-api-access-7drhr\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 
14:59:37.408156 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/71a092ad-773d-47b4-bc1f-73358adecf4a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.420641 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-config-data" (OuterVolumeSpecName: "config-data") pod "71a092ad-773d-47b4-bc1f-73358adecf4a" (UID: "71a092ad-773d-47b4-bc1f-73358adecf4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.510086 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a092ad-773d-47b4-bc1f-73358adecf4a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.619264 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-hctjl" event={"ID":"71a092ad-773d-47b4-bc1f-73358adecf4a","Type":"ContainerDied","Data":"15c5569295772b94dae2b5f9138d2ea881de6ca4a9b203fd4169a88ed3fc0bf0"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.619334 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15c5569295772b94dae2b5f9138d2ea881de6ca4a9b203fd4169a88ed3fc0bf0" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.619368 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-hctjl" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.632108 4718 generic.go:334] "Generic (PLEG): container finished" podID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerID="f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274" exitCode=0 Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.632282 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zkz" event={"ID":"7ced0eff-a905-4c77-abbb-f2417e807e89","Type":"ContainerDied","Data":"f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.632330 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2zkz" event={"ID":"7ced0eff-a905-4c77-abbb-f2417e807e89","Type":"ContainerDied","Data":"a527bc43e6e16532fce409e9f5b450f26f416d3e4034cf1a4bde3c01fbef421d"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.632355 4718 scope.go:117] "RemoveContainer" containerID="f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.632763 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2zkz" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.639026 4718 generic.go:334] "Generic (PLEG): container finished" podID="95261732-95ae-4618-a8a3-c883c287553e" containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" exitCode=1 Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.639116 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerDied","Data":"01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.640345 4718 scope.go:117] "RemoveContainer" containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" Dec 10 14:59:37 crc kubenswrapper[4718]: E1210 14:59:37.640911 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.651206 4718 generic.go:334] "Generic (PLEG): container finished" podID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerID="8304b3ca9e66802bf4345c39855400c88f7e0239f1898e9d6ca36bce702475f1" exitCode=0 Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.651262 4718 generic.go:334] "Generic (PLEG): container finished" podID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerID="db4a4ebfa0197693d3cec46810cf27c433423fc868a385f9df331c8d99b47b10" exitCode=2 Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.651273 4718 generic.go:334] "Generic (PLEG): container finished" podID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" 
containerID="62264a2d67bd8fd5ef03e7e98e4b1ddc79ea540a708bce5bd65507b8d2c77b31" exitCode=0 Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.651552 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerDied","Data":"8304b3ca9e66802bf4345c39855400c88f7e0239f1898e9d6ca36bce702475f1"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.651726 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerDied","Data":"db4a4ebfa0197693d3cec46810cf27c433423fc868a385f9df331c8d99b47b10"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.651741 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerDied","Data":"62264a2d67bd8fd5ef03e7e98e4b1ddc79ea540a708bce5bd65507b8d2c77b31"} Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.732401 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zkz"] Dec 10 14:59:37 crc kubenswrapper[4718]: I1210 14:59:37.745595 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2zkz"] Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.061513 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" path="/var/lib/kubelet/pods/7ced0eff-a905-4c77-abbb-f2417e807e89/volumes" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.119302 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.121892 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a092ad-773d-47b4-bc1f-73358adecf4a" containerName="cinder-db-sync" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.121920 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="71a092ad-773d-47b4-bc1f-73358adecf4a" containerName="cinder-db-sync" Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.121930 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="extract-content" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.121937 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="extract-content" Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.121951 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="registry-server" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.121958 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="registry-server" Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.121987 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="extract-utilities" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.121993 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="extract-utilities" Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.122012 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.122020 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.122032 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.122039 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.122316 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a092ad-773d-47b4-bc1f-73358adecf4a" containerName="cinder-db-sync" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.122326 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.122345 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ced0eff-a905-4c77-abbb-f2417e807e89" containerName="registry-server" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.122361 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cafdb55-0f7a-4b44-94a2-f3005a0384e2" containerName="barbican-api-log" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.135540 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.142566 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-cp2kf" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.142935 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.146375 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.146627 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.159086 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.203496 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-tx99q"] Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.206323 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.238593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.238712 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-scripts\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.241682 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b814c215-d63c-49f4-8760-fb3688f9d9e3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242025 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-config\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242140 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 
14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242215 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpbw\" (UniqueName: \"kubernetes.io/projected/f6871804-824e-4dc2-a7db-f7c788915808-kube-api-access-4jpbw\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242318 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242356 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242497 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242629 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-svc\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " 
pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242741 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbsr7\" (UniqueName: \"kubernetes.io/projected/b814c215-d63c-49f4-8760-fb3688f9d9e3-kube-api-access-wbsr7\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.242850 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.255949 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-tx99q"] Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345318 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-svc\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345500 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbsr7\" (UniqueName: \"kubernetes.io/projected/b814c215-d63c-49f4-8760-fb3688f9d9e3-kube-api-access-wbsr7\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345550 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345642 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-scripts\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345672 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b814c215-d63c-49f4-8760-fb3688f9d9e3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345736 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-config\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345771 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " 
pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345799 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jpbw\" (UniqueName: \"kubernetes.io/projected/f6871804-824e-4dc2-a7db-f7c788915808-kube-api-access-4jpbw\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345837 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345856 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.345900 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.347305 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 
10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.348134 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-svc\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.350740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-config\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.353477 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b814c215-d63c-49f4-8760-fb3688f9d9e3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.354272 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.355771 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.370801 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.373371 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.383779 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jpbw\" (UniqueName: \"kubernetes.io/projected/f6871804-824e-4dc2-a7db-f7c788915808-kube-api-access-4jpbw\") pod \"dnsmasq-dns-5cd5fc4d55-tx99q\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.385765 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-scripts\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.390718 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbsr7\" (UniqueName: \"kubernetes.io/projected/b814c215-d63c-49f4-8760-fb3688f9d9e3-kube-api-access-wbsr7\") pod \"cinder-scheduler-0\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.395055 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.456277 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.459971 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.468116 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.493267 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.517193 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.539939 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.566011 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpc8h\" (UniqueName: \"kubernetes.io/projected/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-kube-api-access-hpc8h\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.566138 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.566218 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.566686 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.566927 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-logs\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.567210 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.567335 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-scripts\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.590071 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.668465 4718 scope.go:117] "RemoveContainer" 
containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669434 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-logs\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669618 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669685 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-scripts\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669729 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpc8h\" (UniqueName: \"kubernetes.io/projected/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-kube-api-access-hpc8h\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669864 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669912 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.669998 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data-custom\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: E1210 14:59:38.673035 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.673323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-logs\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.673983 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.683376 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.690600 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.697607 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-scripts\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.703270 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.710128 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpc8h\" (UniqueName: \"kubernetes.io/projected/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-kube-api-access-hpc8h\") pod \"cinder-api-0\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " pod="openstack/cinder-api-0" Dec 10 14:59:38 crc kubenswrapper[4718]: I1210 14:59:38.841803 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 14:59:39 crc kubenswrapper[4718]: I1210 14:59:39.301731 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c7874df4c-ns7dm" Dec 10 14:59:39 crc kubenswrapper[4718]: I1210 14:59:39.685943 4718 generic.go:334] "Generic (PLEG): container finished" podID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerID="e71ee9423876c4df1d64e82e0e300727ba7d89ccc168d37fab5011a004587e62" exitCode=0 Dec 10 14:59:39 crc kubenswrapper[4718]: I1210 14:59:39.686017 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerDied","Data":"e71ee9423876c4df1d64e82e0e300727ba7d89ccc168d37fab5011a004587e62"} Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.098299 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bqj7n"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.100254 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.133844 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bqj7n"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.204506 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-z2hg5"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.206987 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.216076 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z2hg5"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.239935 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7plmv\" (UniqueName: \"kubernetes.io/projected/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-kube-api-access-7plmv\") pod \"nova-api-db-create-bqj7n\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.240044 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-operator-scripts\") pod \"nova-api-db-create-bqj7n\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.326219 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1795-account-create-update-xl996"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.328706 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.333245 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.342242 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-operator-scripts\") pod \"nova-api-db-create-bqj7n\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.342462 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f397d2-d553-4a53-88b2-314c2dc7ebf6-operator-scripts\") pod \"nova-cell0-db-create-z2hg5\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.342679 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8b7\" (UniqueName: \"kubernetes.io/projected/92f397d2-d553-4a53-88b2-314c2dc7ebf6-kube-api-access-tb8b7\") pod \"nova-cell0-db-create-z2hg5\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.342828 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7plmv\" (UniqueName: \"kubernetes.io/projected/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-kube-api-access-7plmv\") pod \"nova-api-db-create-bqj7n\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.346824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-operator-scripts\") pod \"nova-api-db-create-bqj7n\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.363239 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1795-account-create-update-xl996"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.389283 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7plmv\" (UniqueName: \"kubernetes.io/projected/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-kube-api-access-7plmv\") pod \"nova-api-db-create-bqj7n\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.430331 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.449359 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-operator-scripts\") pod \"nova-api-1795-account-create-update-xl996\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.449498 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzprn\" (UniqueName: \"kubernetes.io/projected/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-kube-api-access-bzprn\") pod \"nova-api-1795-account-create-update-xl996\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.449625 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f397d2-d553-4a53-88b2-314c2dc7ebf6-operator-scripts\") pod \"nova-cell0-db-create-z2hg5\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.449740 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8b7\" (UniqueName: \"kubernetes.io/projected/92f397d2-d553-4a53-88b2-314c2dc7ebf6-kube-api-access-tb8b7\") pod \"nova-cell0-db-create-z2hg5\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.451544 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f397d2-d553-4a53-88b2-314c2dc7ebf6-operator-scripts\") pod \"nova-cell0-db-create-z2hg5\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.498505 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q7sbn"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.500302 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.504963 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8b7\" (UniqueName: \"kubernetes.io/projected/92f397d2-d553-4a53-88b2-314c2dc7ebf6-kube-api-access-tb8b7\") pod \"nova-cell0-db-create-z2hg5\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.521494 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-abb8-account-create-update-gvhvt"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.525845 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.528425 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.545057 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.548703 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q7sbn"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.557935 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/494a054f-4818-4649-83ef-5b058e0d9436-operator-scripts\") pod \"nova-cell1-db-create-q7sbn\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.558348 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c96sm\" (UniqueName: \"kubernetes.io/projected/494a054f-4818-4649-83ef-5b058e0d9436-kube-api-access-c96sm\") pod \"nova-cell1-db-create-q7sbn\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.559887 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-operator-scripts\") pod \"nova-api-1795-account-create-update-xl996\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.560116 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzprn\" (UniqueName: \"kubernetes.io/projected/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-kube-api-access-bzprn\") pod \"nova-api-1795-account-create-update-xl996\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.560913 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-operator-scripts\") pod \"nova-api-1795-account-create-update-xl996\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.592673 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-abb8-account-create-update-gvhvt"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.625352 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzprn\" (UniqueName: \"kubernetes.io/projected/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-kube-api-access-bzprn\") pod \"nova-api-1795-account-create-update-xl996\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.658089 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.666630 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/494a054f-4818-4649-83ef-5b058e0d9436-operator-scripts\") pod \"nova-cell1-db-create-q7sbn\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.667044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c96sm\" (UniqueName: \"kubernetes.io/projected/494a054f-4818-4649-83ef-5b058e0d9436-kube-api-access-c96sm\") pod \"nova-cell1-db-create-q7sbn\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.667194 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfzjq\" (UniqueName: \"kubernetes.io/projected/53c8e09f-d39f-4f38-9e9e-199399c09a14-kube-api-access-qfzjq\") pod \"nova-cell0-abb8-account-create-update-gvhvt\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.667892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53c8e09f-d39f-4f38-9e9e-199399c09a14-operator-scripts\") pod \"nova-cell0-abb8-account-create-update-gvhvt\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.668281 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/494a054f-4818-4649-83ef-5b058e0d9436-operator-scripts\") pod \"nova-cell1-db-create-q7sbn\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.723745 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c96sm\" (UniqueName: \"kubernetes.io/projected/494a054f-4818-4649-83ef-5b058e0d9436-kube-api-access-c96sm\") pod \"nova-cell1-db-create-q7sbn\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.775448 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9820-account-create-update-dph9s"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.779198 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.790712 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.822685 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53c8e09f-d39f-4f38-9e9e-199399c09a14-operator-scripts\") pod \"nova-cell0-abb8-account-create-update-gvhvt\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.824765 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfzjq\" (UniqueName: \"kubernetes.io/projected/53c8e09f-d39f-4f38-9e9e-199399c09a14-kube-api-access-qfzjq\") pod \"nova-cell0-abb8-account-create-update-gvhvt\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" 
Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.826689 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53c8e09f-d39f-4f38-9e9e-199399c09a14-operator-scripts\") pod \"nova-cell0-abb8-account-create-update-gvhvt\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.827183 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9820-account-create-update-dph9s"] Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.873622 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfzjq\" (UniqueName: \"kubernetes.io/projected/53c8e09f-d39f-4f38-9e9e-199399c09a14-kube-api-access-qfzjq\") pod \"nova-cell0-abb8-account-create-update-gvhvt\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.914459 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.928567 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxl7\" (UniqueName: \"kubernetes.io/projected/973eb84a-8809-4047-9112-4501f249ba68-kube-api-access-7wxl7\") pod \"nova-cell1-9820-account-create-update-dph9s\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.928848 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973eb84a-8809-4047-9112-4501f249ba68-operator-scripts\") pod \"nova-cell1-9820-account-create-update-dph9s\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:40 crc kubenswrapper[4718]: I1210 14:59:40.976688 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.031435 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxl7\" (UniqueName: \"kubernetes.io/projected/973eb84a-8809-4047-9112-4501f249ba68-kube-api-access-7wxl7\") pod \"nova-cell1-9820-account-create-update-dph9s\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.031558 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973eb84a-8809-4047-9112-4501f249ba68-operator-scripts\") pod \"nova-cell1-9820-account-create-update-dph9s\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.032476 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973eb84a-8809-4047-9112-4501f249ba68-operator-scripts\") pod \"nova-cell1-9820-account-create-update-dph9s\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.057824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxl7\" (UniqueName: \"kubernetes.io/projected/973eb84a-8809-4047-9112-4501f249ba68-kube-api-access-7wxl7\") pod \"nova-cell1-9820-account-create-update-dph9s\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.111611 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.331359 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.331726 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.527018 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.527111 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 14:59:41 crc kubenswrapper[4718]: I1210 14:59:41.869690 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:59:42 crc kubenswrapper[4718]: I1210 14:59:42.020969 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:59:42 crc kubenswrapper[4718]: E1210 14:59:42.021259 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:59:42 crc kubenswrapper[4718]: I1210 14:59:42.533680 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:42 crc kubenswrapper[4718]: I1210 14:59:42.533770 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9xz82" Dec 10 
14:59:42 crc kubenswrapper[4718]: I1210 14:59:42.623818 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:42 crc kubenswrapper[4718]: I1210 14:59:42.850236 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:42 crc kubenswrapper[4718]: I1210 14:59:42.924555 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xz82"] Dec 10 14:59:44 crc kubenswrapper[4718]: I1210 14:59:44.806261 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9xz82" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="registry-server" containerID="cri-o://92cc544e9fbbf61ed33fa3623394151b70c945bc4000f8dd264954aec83cc3fb" gracePeriod=2 Dec 10 14:59:45 crc kubenswrapper[4718]: I1210 14:59:45.822993 4718 generic.go:334] "Generic (PLEG): container finished" podID="db4ee945-67d7-4670-9192-2ecaf4f03c3d" containerID="1ac122f7920ac5e36138ce8ff9e8333fbba94203b46380f16113189b53ac545e" exitCode=0 Dec 10 14:59:45 crc kubenswrapper[4718]: I1210 14:59:45.823081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zr6j2" event={"ID":"db4ee945-67d7-4670-9192-2ecaf4f03c3d","Type":"ContainerDied","Data":"1ac122f7920ac5e36138ce8ff9e8333fbba94203b46380f16113189b53ac545e"} Dec 10 14:59:45 crc kubenswrapper[4718]: I1210 14:59:45.830903 4718 generic.go:334] "Generic (PLEG): container finished" podID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerID="92cc544e9fbbf61ed33fa3623394151b70c945bc4000f8dd264954aec83cc3fb" exitCode=0 Dec 10 14:59:45 crc kubenswrapper[4718]: I1210 14:59:45.830948 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xz82" 
event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerDied","Data":"92cc544e9fbbf61ed33fa3623394151b70c945bc4000f8dd264954aec83cc3fb"} Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.301624 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5vctd"] Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.304714 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.342097 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vctd"] Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.389011 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmv67\" (UniqueName: \"kubernetes.io/projected/81ebdd62-3494-4d9a-8d04-ae6122173e69-kube-api-access-cmv67\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.389182 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ebdd62-3494-4d9a-8d04-ae6122173e69-utilities\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.389342 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ebdd62-3494-4d9a-8d04-ae6122173e69-catalog-content\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.492288 
4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmv67\" (UniqueName: \"kubernetes.io/projected/81ebdd62-3494-4d9a-8d04-ae6122173e69-kube-api-access-cmv67\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.492438 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ebdd62-3494-4d9a-8d04-ae6122173e69-utilities\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.492711 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ebdd62-3494-4d9a-8d04-ae6122173e69-catalog-content\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.493308 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81ebdd62-3494-4d9a-8d04-ae6122173e69-catalog-content\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.493302 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81ebdd62-3494-4d9a-8d04-ae6122173e69-utilities\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.521211 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cmv67\" (UniqueName: \"kubernetes.io/projected/81ebdd62-3494-4d9a-8d04-ae6122173e69-kube-api-access-cmv67\") pod \"certified-operators-5vctd\" (UID: \"81ebdd62-3494-4d9a-8d04-ae6122173e69\") " pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:46 crc kubenswrapper[4718]: I1210 14:59:46.637781 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5vctd" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.537665 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.538276 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.538492 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h579h7dhb9h5f5h66fh55bh667hfbh574h54ch696h66dh584h5b8h554h9fhd7h577h67fhbch668h697h5b9hc4h66bhd8h549hc9h8bh65h59cq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ct5k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(c0b43254-f8fe-4187-a8ce-aa65f7ac327e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.540770 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="c0b43254-f8fe-4187-a8ce-aa65f7ac327e" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.599582 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.600968 4718 scope.go:117] "RemoveContainer" containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.601369 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.602605 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 
14:59:48.803873 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.924021 4718 scope.go:117] "RemoveContainer" containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.924449 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.924715 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zr6j2" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.925014 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zr6j2" event={"ID":"db4ee945-67d7-4670-9192-2ecaf4f03c3d","Type":"ContainerDied","Data":"60fede4de97b5270c89c4bd04a080af782c13ea31acedbb13bbef7c09c0d048a"} Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.925049 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fede4de97b5270c89c4bd04a080af782c13ea31acedbb13bbef7c09c0d048a" Dec 10 14:59:48 crc kubenswrapper[4718]: I1210 14:59:48.936688 4718 scope.go:117] "RemoveContainer" containerID="844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497" Dec 10 14:59:48 crc kubenswrapper[4718]: E1210 14:59:48.951949 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current\\\"\"" pod="openstack/openstackclient" 
podUID="c0b43254-f8fe-4187-a8ce-aa65f7ac327e" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.002560 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-666bl\" (UniqueName: \"kubernetes.io/projected/db4ee945-67d7-4670-9192-2ecaf4f03c3d-kube-api-access-666bl\") pod \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.002642 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-combined-ca-bundle\") pod \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.003662 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-config\") pod \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\" (UID: \"db4ee945-67d7-4670-9192-2ecaf4f03c3d\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.019225 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4ee945-67d7-4670-9192-2ecaf4f03c3d-kube-api-access-666bl" (OuterVolumeSpecName: "kube-api-access-666bl") pod "db4ee945-67d7-4670-9192-2ecaf4f03c3d" (UID: "db4ee945-67d7-4670-9192-2ecaf4f03c3d"). InnerVolumeSpecName "kube-api-access-666bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.075609 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db4ee945-67d7-4670-9192-2ecaf4f03c3d" (UID: "db4ee945-67d7-4670-9192-2ecaf4f03c3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.077881 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-config" (OuterVolumeSpecName: "config") pod "db4ee945-67d7-4670-9192-2ecaf4f03c3d" (UID: "db4ee945-67d7-4670-9192-2ecaf4f03c3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.089504 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.176:3000/\": dial tcp 10.217.0.176:3000: connect: connection refused" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.107220 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.107281 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-666bl\" (UniqueName: \"kubernetes.io/projected/db4ee945-67d7-4670-9192-2ecaf4f03c3d-kube-api-access-666bl\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.107300 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db4ee945-67d7-4670-9192-2ecaf4f03c3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.205551 4718 scope.go:117] "RemoveContainer" containerID="00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.276642 4718 scope.go:117] "RemoveContainer" containerID="f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274" Dec 10 14:59:49 crc 
kubenswrapper[4718]: E1210 14:59:49.320762 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274\": container with ID starting with f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274 not found: ID does not exist" containerID="f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.320824 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274"} err="failed to get container status \"f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274\": rpc error: code = NotFound desc = could not find container \"f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274\": container with ID starting with f04c473e26acadccf4c98af1027269926d2fed9cd16a6d167605f98ab0904274 not found: ID does not exist" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.320861 4718 scope.go:117] "RemoveContainer" containerID="844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497" Dec 10 14:59:49 crc kubenswrapper[4718]: E1210 14:59:49.333591 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497\": container with ID starting with 844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497 not found: ID does not exist" containerID="844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.333681 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497"} err="failed to get container status 
\"844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497\": rpc error: code = NotFound desc = could not find container \"844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497\": container with ID starting with 844502f544438e16600d23a93f5cb482eca1b7a50a612e2b874ed9e8f1d1c497 not found: ID does not exist" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.333731 4718 scope.go:117] "RemoveContainer" containerID="00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735" Dec 10 14:59:49 crc kubenswrapper[4718]: E1210 14:59:49.334743 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735\": container with ID starting with 00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735 not found: ID does not exist" containerID="00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.334789 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735"} err="failed to get container status \"00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735\": rpc error: code = NotFound desc = could not find container \"00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735\": container with ID starting with 00f06fc96fb559d7eccc8d08872fb0f6bfa5269ea99011d645a2b23b12f24735 not found: ID does not exist" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.334809 4718 scope.go:117] "RemoveContainer" containerID="fe1809f1bf4abad0ceec1c1f2a13e3871e4415fc733067a42435419dae2a5707" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.770013 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.834757 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.938468 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hc79\" (UniqueName: \"kubernetes.io/projected/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-kube-api-access-9hc79\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.938582 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-combined-ca-bundle\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.938782 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-scripts\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.938885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-utilities\") pod \"39e78dd0-89b2-4d82-acc7-9b0818c40856\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.938920 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-sg-core-conf-yaml\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: 
\"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.939052 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-catalog-content\") pod \"39e78dd0-89b2-4d82-acc7-9b0818c40856\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.939146 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-run-httpd\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.939165 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-log-httpd\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.939183 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-config-data\") pod \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\" (UID: \"3cc0d50f-35f9-45b0-8316-cdf12ae72a97\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.939235 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkmrg\" (UniqueName: \"kubernetes.io/projected/39e78dd0-89b2-4d82-acc7-9b0818c40856-kube-api-access-qkmrg\") pod \"39e78dd0-89b2-4d82-acc7-9b0818c40856\" (UID: \"39e78dd0-89b2-4d82-acc7-9b0818c40856\") " Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.940990 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.941034 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.941473 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-utilities" (OuterVolumeSpecName: "utilities") pod "39e78dd0-89b2-4d82-acc7-9b0818c40856" (UID: "39e78dd0-89b2-4d82-acc7-9b0818c40856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.980742 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9xz82" event={"ID":"39e78dd0-89b2-4d82-acc7-9b0818c40856","Type":"ContainerDied","Data":"809de080b0ed8973894a20049946ccbf3266affadfe22a153d77afa89ac28bcf"} Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.980838 4718 scope.go:117] "RemoveContainer" containerID="92cc544e9fbbf61ed33fa3623394151b70c945bc4000f8dd264954aec83cc3fb" Dec 10 14:59:49 crc kubenswrapper[4718]: I1210 14:59:49.981055 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9xz82" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.024061 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.032244 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-scripts" (OuterVolumeSpecName: "scripts") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.032606 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-kube-api-access-9hc79" (OuterVolumeSpecName: "kube-api-access-9hc79") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "kube-api-access-9hc79". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.035987 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e78dd0-89b2-4d82-acc7-9b0818c40856-kube-api-access-qkmrg" (OuterVolumeSpecName: "kube-api-access-qkmrg") pod "39e78dd0-89b2-4d82-acc7-9b0818c40856" (UID: "39e78dd0-89b2-4d82-acc7-9b0818c40856"). InnerVolumeSpecName "kube-api-access-qkmrg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.044103 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.044150 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.044162 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.044175 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkmrg\" (UniqueName: \"kubernetes.io/projected/39e78dd0-89b2-4d82-acc7-9b0818c40856-kube-api-access-qkmrg\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.044194 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hc79\" (UniqueName: \"kubernetes.io/projected/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-kube-api-access-9hc79\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.044212 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.049124 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39e78dd0-89b2-4d82-acc7-9b0818c40856" (UID: "39e78dd0-89b2-4d82-acc7-9b0818c40856"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.073319 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.151594 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39e78dd0-89b2-4d82-acc7-9b0818c40856-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.151630 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.174405 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bqj7n"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.174459 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3cc0d50f-35f9-45b0-8316-cdf12ae72a97","Type":"ContainerDied","Data":"7ceb32b8779cb7fa70cc58443dafa4c2c0b4e1d53a77d0206e76eb31d16c04ff"} Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.206410 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.234359 4718 scope.go:117] "RemoveContainer" containerID="0969c38e88f40864fa5502bfcc60fb16a711b763ad73c142d7536a727b2d3e18" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.259631 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.348237 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-tx99q"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.558696 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-config-data" (OuterVolumeSpecName: "config-data") pod "3cc0d50f-35f9-45b0-8316-cdf12ae72a97" (UID: "3cc0d50f-35f9-45b0-8316-cdf12ae72a97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.601527 4718 scope.go:117] "RemoveContainer" containerID="2a18ee072e8c1857284445293cd826895b4c0cfb034749ba066430030d9efc2f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.606505 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cc0d50f-35f9-45b0-8316-cdf12ae72a97-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.616287 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9xz82"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.648377 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9xz82"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.676252 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75958fc765-86zrn"] Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679683 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="sg-core" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679706 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="sg-core" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679731 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="proxy-httpd" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679738 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="proxy-httpd" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679750 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-notification-agent" Dec 10 
14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679757 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-notification-agent" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679783 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4ee945-67d7-4670-9192-2ecaf4f03c3d" containerName="neutron-db-sync" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679790 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4ee945-67d7-4670-9192-2ecaf4f03c3d" containerName="neutron-db-sync" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679801 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-central-agent" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679808 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-central-agent" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679827 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="registry-server" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679833 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="registry-server" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679850 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="extract-content" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679857 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="extract-content" Dec 10 14:59:50 crc kubenswrapper[4718]: E1210 14:59:50.679865 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" 
containerName="extract-utilities" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.679880 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="extract-utilities" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.680115 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="sg-core" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.680145 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="proxy-httpd" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.680161 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4ee945-67d7-4670-9192-2ecaf4f03c3d" containerName="neutron-db-sync" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.680173 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-central-agent" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.680187 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" containerName="ceilometer-notification-agent" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.680204 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" containerName="registry-server" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.681706 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.712293 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-svc\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.712493 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-swift-storage-0\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.712663 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-config\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.712804 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2fk4\" (UniqueName: \"kubernetes.io/projected/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-kube-api-access-b2fk4\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.712929 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-nb\") pod 
\"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.712978 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-sb\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.727702 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-86zrn"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.733991 4718 scope.go:117] "RemoveContainer" containerID="8304b3ca9e66802bf4345c39855400c88f7e0239f1898e9d6ca36bce702475f1" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.744237 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.793828 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.835159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-nb\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.835231 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-sb\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 
14:59:50.835289 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-svc\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.835339 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-swift-storage-0\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.835416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-config\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.835467 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2fk4\" (UniqueName: \"kubernetes.io/projected/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-kube-api-access-b2fk4\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.836561 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-nb\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.836903 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-sb\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.837277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-swift-storage-0\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.837428 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-svc\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.842300 4718 scope.go:117] "RemoveContainer" containerID="db4a4ebfa0197693d3cec46810cf27c433423fc868a385f9df331c8d99b47b10" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.843726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-config\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.856640 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7cb5f4dbb8-qk86f"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.865276 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.878048 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.878274 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.878361 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-hfcj7" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.879794 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.906936 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2fk4\" (UniqueName: \"kubernetes.io/projected/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-kube-api-access-b2fk4\") pod \"dnsmasq-dns-75958fc765-86zrn\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.907173 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.919563 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.941047 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.941497 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.942508 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949011 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dtrl\" (UniqueName: \"kubernetes.io/projected/aef87f6d-8e6b-4569-b12d-b10b34872959-kube-api-access-5dtrl\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949173 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-config\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949283 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-config-data\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-scripts\") pod \"ceilometer-0\" (UID: 
\"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949373 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-ovndb-tls-certs\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949603 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-combined-ca-bundle\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949682 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-httpd-config\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " 
pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949814 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949885 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.949985 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j65dn\" (UniqueName: \"kubernetes.io/projected/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-kube-api-access-j65dn\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.960498 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cb5f4dbb8-qk86f"] Dec 10 14:59:50 crc kubenswrapper[4718]: I1210 14:59:50.983461 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q7sbn"] Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.001226 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-abb8-account-create-update-gvhvt"] Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057378 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " 
pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057525 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057664 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j65dn\" (UniqueName: \"kubernetes.io/projected/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-kube-api-access-j65dn\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057703 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dtrl\" (UniqueName: \"kubernetes.io/projected/aef87f6d-8e6b-4569-b12d-b10b34872959-kube-api-access-5dtrl\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057816 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-config\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057907 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-config-data\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.057968 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-scripts\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.058020 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-ovndb-tls-certs\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.058712 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-combined-ca-bundle\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.067435 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.067567 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-httpd-config\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: 
\"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.077052 4718 scope.go:117] "RemoveContainer" containerID="62264a2d67bd8fd5ef03e7e98e4b1ddc79ea540a708bce5bd65507b8d2c77b31" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.059455 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-run-httpd\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.059886 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-log-httpd\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.086976 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.086994 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-config-data\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.095938 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.099808 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-config\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.102590 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-ovndb-tls-certs\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.105773 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-httpd-config\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.111514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-scripts\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.112788 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-combined-ca-bundle\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.114233 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j65dn\" (UniqueName: \"kubernetes.io/projected/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-kube-api-access-j65dn\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.124783 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" event={"ID":"53c8e09f-d39f-4f38-9e9e-199399c09a14","Type":"ContainerStarted","Data":"67daca023360235070f0916125a12ff4a6b9fb0a52534451d78ef3853c049db2"} Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.127211 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.127603 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q7sbn" event={"ID":"494a054f-4818-4649-83ef-5b058e0d9436","Type":"ContainerStarted","Data":"1a294a727382cd25bdd4a7517a99a743ba66d74f514dc09426cfc3ca090c29df"} Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.128790 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dtrl\" (UniqueName: \"kubernetes.io/projected/aef87f6d-8e6b-4569-b12d-b10b34872959-kube-api-access-5dtrl\") pod \"neutron-7cb5f4dbb8-qk86f\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.176876 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqj7n" event={"ID":"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e","Type":"ContainerStarted","Data":"3788c1c2dfd3bab1fc2a97ccb9179fee7e7a0e8383936e75c17253c6eabcf485"} Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.177544 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqj7n" event={"ID":"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e","Type":"ContainerStarted","Data":"ca3dee9ebe2442722e8d97969d706a2119bde1e1fe7fe37768f45ea3656bea33"} Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.184159 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-z2hg5"] Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.200304 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1795-account-create-update-xl996"] Dec 10 14:59:51 crc kubenswrapper[4718]: W1210 14:59:51.219150 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6871804_824e_4dc2_a7db_f7c788915808.slice/crio-323b59fc5c0353783072d2980a37a42b318e677f1539488f25591cc97377b3d2 WatchSource:0}: Error finding container 323b59fc5c0353783072d2980a37a42b318e677f1539488f25591cc97377b3d2: Status 404 returned error can't find the container with id 323b59fc5c0353783072d2980a37a42b318e677f1539488f25591cc97377b3d2 Dec 10 14:59:51 crc kubenswrapper[4718]: W1210 14:59:51.234733 4718 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973eb84a_8809_4047_9112_4501f249ba68.slice/crio-be143394851ef519ba9a8c0ffb5fdfe1c4ee70dced442f960eaedb4358a6e8f2 WatchSource:0}: Error finding container be143394851ef519ba9a8c0ffb5fdfe1c4ee70dced442f960eaedb4358a6e8f2: Status 404 returned error can't find the container with id be143394851ef519ba9a8c0ffb5fdfe1c4ee70dced442f960eaedb4358a6e8f2 Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.236632 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-tx99q"] Dec 10 14:59:51 crc kubenswrapper[4718]: W1210 14:59:51.253597 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92f397d2_d553_4a53_88b2_314c2dc7ebf6.slice/crio-0b442256c5864966d1031fa770daa01452d00460c14542a5a24fcf676a7c141a WatchSource:0}: Error finding container 0b442256c5864966d1031fa770daa01452d00460c14542a5a24fcf676a7c141a: Status 404 returned error can't find the container with id 0b442256c5864966d1031fa770daa01452d00460c14542a5a24fcf676a7c141a Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.278698 4718 scope.go:117] "RemoveContainer" containerID="e71ee9423876c4df1d64e82e0e300727ba7d89ccc168d37fab5011a004587e62" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.311699 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 14:59:51 crc kubenswrapper[4718]: W1210 14:59:51.312380 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb814c215_d63c_49f4_8760_fb3688f9d9e3.slice/crio-25f9b445a7e77cb5b55d723adf9fc11af4e81076a940d61ac4fe2d5646271dc7 WatchSource:0}: Error finding container 25f9b445a7e77cb5b55d723adf9fc11af4e81076a940d61ac4fe2d5646271dc7: Status 404 returned error can't find the container with id 25f9b445a7e77cb5b55d723adf9fc11af4e81076a940d61ac4fe2d5646271dc7 Dec 10 
14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.326965 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.330633 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.351964 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.378305 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9820-account-create-update-dph9s"] Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.395490 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.415076 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-bqj7n" podStartSLOduration=11.415042484 podStartE2EDuration="11.415042484s" podCreationTimestamp="2025-12-10 14:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:51.184774452 +0000 UTC m=+1696.133997869" watchObservedRunningTime="2025-12-10 14:59:51.415042484 +0000 UTC m=+1696.364265901" Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.560988 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vctd"] Dec 10 14:59:51 crc kubenswrapper[4718]: I1210 14:59:51.572046 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77c9ddb894-brvxz" 
podUID="e1a09589-44b9-49f4-8970-d3381c3d4b99" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.077742 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e78dd0-89b2-4d82-acc7-9b0818c40856" path="/var/lib/kubelet/pods/39e78dd0-89b2-4d82-acc7-9b0818c40856/volumes" Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.084629 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc0d50f-35f9-45b0-8316-cdf12ae72a97" path="/var/lib/kubelet/pods/3cc0d50f-35f9-45b0-8316-cdf12ae72a97/volumes" Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.222361 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1795-account-create-update-xl996" event={"ID":"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240","Type":"ContainerStarted","Data":"b259724fdbb71844f3d23ef06ac648d537016ad2f2d51af66858019ad2728a1b"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.235631 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9820-account-create-update-dph9s" event={"ID":"973eb84a-8809-4047-9112-4501f249ba68","Type":"ContainerStarted","Data":"be143394851ef519ba9a8c0ffb5fdfe1c4ee70dced442f960eaedb4358a6e8f2"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.240916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z2hg5" event={"ID":"92f397d2-d553-4a53-88b2-314c2dc7ebf6","Type":"ContainerStarted","Data":"0b442256c5864966d1031fa770daa01452d00460c14542a5a24fcf676a7c141a"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.248904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"b814c215-d63c-49f4-8760-fb3688f9d9e3","Type":"ContainerStarted","Data":"25f9b445a7e77cb5b55d723adf9fc11af4e81076a940d61ac4fe2d5646271dc7"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.259351 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" event={"ID":"f6871804-824e-4dc2-a7db-f7c788915808","Type":"ContainerStarted","Data":"323b59fc5c0353783072d2980a37a42b318e677f1539488f25591cc97377b3d2"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.273725 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vctd" event={"ID":"81ebdd62-3494-4d9a-8d04-ae6122173e69","Type":"ContainerStarted","Data":"f60f79b37b84ca5a2895c1fb0f70ac7049955c85c2c85964cb71bb8f095441f7"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.291100 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" event={"ID":"53c8e09f-d39f-4f38-9e9e-199399c09a14","Type":"ContainerStarted","Data":"a29fd9c61aa28706fa5c1ffbb42321687f1b88f8cd61205213d14fefcf0558fa"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.321961 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc254fa2-5ac7-45d7-845b-f1ac0c898e98","Type":"ContainerStarted","Data":"59e09028113e2e5577ba2309668687def831b0c8bb818f71fd313d292b0259a7"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.452437 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q7sbn" event={"ID":"494a054f-4818-4649-83ef-5b058e0d9436","Type":"ContainerStarted","Data":"f7a33f0cb27f4558a08f843750f07d74190973272c6b174236e70bfa6c0c2489"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.472475 4718 generic.go:334] "Generic (PLEG): container finished" podID="2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" containerID="3788c1c2dfd3bab1fc2a97ccb9179fee7e7a0e8383936e75c17253c6eabcf485" exitCode=0 Dec 10 
14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.472546 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqj7n" event={"ID":"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e","Type":"ContainerDied","Data":"3788c1c2dfd3bab1fc2a97ccb9179fee7e7a0e8383936e75c17253c6eabcf485"} Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.507137 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" podStartSLOduration=12.507108625 podStartE2EDuration="12.507108625s" podCreationTimestamp="2025-12-10 14:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:52.44040859 +0000 UTC m=+1697.389632007" watchObservedRunningTime="2025-12-10 14:59:52.507108625 +0000 UTC m=+1697.456332032" Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.576371 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-q7sbn" podStartSLOduration=12.576338663 podStartE2EDuration="12.576338663s" podCreationTimestamp="2025-12-10 14:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:52.481463701 +0000 UTC m=+1697.430687118" watchObservedRunningTime="2025-12-10 14:59:52.576338663 +0000 UTC m=+1697.525562080" Dec 10 14:59:52 crc kubenswrapper[4718]: I1210 14:59:52.678450 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-86zrn"] Dec 10 14:59:53 crc kubenswrapper[4718]: I1210 14:59:53.031142 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 14:59:53 crc kubenswrapper[4718]: I1210 14:59:53.384353 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7cb5f4dbb8-qk86f"] Dec 10 14:59:53 crc kubenswrapper[4718]: I1210 14:59:53.489975 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1795-account-create-update-xl996" event={"ID":"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240","Type":"ContainerStarted","Data":"fdb8c0b77873ec44286374f4c946c1bb142cba1c170126b8e161a530444c509c"} Dec 10 14:59:53 crc kubenswrapper[4718]: I1210 14:59:53.497194 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z2hg5" event={"ID":"92f397d2-d553-4a53-88b2-314c2dc7ebf6","Type":"ContainerStarted","Data":"71042674cf241a0594e797aeb1d33b8781828a24c8a0265bb141be1271360622"} Dec 10 14:59:53 crc kubenswrapper[4718]: I1210 14:59:53.543359 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-1795-account-create-update-xl996" podStartSLOduration=13.543319814 podStartE2EDuration="13.543319814s" podCreationTimestamp="2025-12-10 14:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:53.528143123 +0000 UTC m=+1698.477366550" watchObservedRunningTime="2025-12-10 14:59:53.543319814 +0000 UTC m=+1698.492543231" Dec 10 14:59:53 crc kubenswrapper[4718]: W1210 14:59:53.563249 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffdff5c_1fe4_4dbb_93c7_14f497a4e939.slice/crio-06c9969e1d51169ad21079e6aed664ea730a710536e2c86b706fc0d6ffc8ef5e WatchSource:0}: Error finding container 06c9969e1d51169ad21079e6aed664ea730a710536e2c86b706fc0d6ffc8ef5e: Status 404 returned error can't find the container with id 06c9969e1d51169ad21079e6aed664ea730a710536e2c86b706fc0d6ffc8ef5e Dec 10 14:59:53 crc kubenswrapper[4718]: I1210 14:59:53.585111 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-z2hg5" podStartSLOduration=13.585057662 podStartE2EDuration="13.585057662s" podCreationTimestamp="2025-12-10 14:59:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:53.561272955 +0000 UTC m=+1698.510496382" watchObservedRunningTime="2025-12-10 14:59:53.585057662 +0000 UTC m=+1698.534281079" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.128007 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.260628 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7plmv\" (UniqueName: \"kubernetes.io/projected/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-kube-api-access-7plmv\") pod \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.260858 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-operator-scripts\") pod \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\" (UID: \"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e\") " Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.261689 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" (UID: "2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.316779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-kube-api-access-7plmv" (OuterVolumeSpecName: "kube-api-access-7plmv") pod "2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" (UID: "2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e"). 
InnerVolumeSpecName "kube-api-access-7plmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.367504 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7plmv\" (UniqueName: \"kubernetes.io/projected/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-kube-api-access-7plmv\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.367567 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.544459 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerStarted","Data":"df06077e44d2eac051401c5b6501f493fe043345d76299a6b66a21a8792ab47e"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.554994 4718 generic.go:334] "Generic (PLEG): container finished" podID="53c8e09f-d39f-4f38-9e9e-199399c09a14" containerID="a29fd9c61aa28706fa5c1ffbb42321687f1b88f8cd61205213d14fefcf0558fa" exitCode=0 Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.555159 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" event={"ID":"53c8e09f-d39f-4f38-9e9e-199399c09a14","Type":"ContainerDied","Data":"a29fd9c61aa28706fa5c1ffbb42321687f1b88f8cd61205213d14fefcf0558fa"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.584519 4718 generic.go:334] "Generic (PLEG): container finished" podID="e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" containerID="fdb8c0b77873ec44286374f4c946c1bb142cba1c170126b8e161a530444c509c" exitCode=0 Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.584709 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1795-account-create-update-xl996" 
event={"ID":"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240","Type":"ContainerDied","Data":"fdb8c0b77873ec44286374f4c946c1bb142cba1c170126b8e161a530444c509c"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.634538 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bqj7n" event={"ID":"2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e","Type":"ContainerDied","Data":"ca3dee9ebe2442722e8d97969d706a2119bde1e1fe7fe37768f45ea3656bea33"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.634596 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca3dee9ebe2442722e8d97969d706a2119bde1e1fe7fe37768f45ea3656bea33" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.634737 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bqj7n" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.665772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-86zrn" event={"ID":"dffdff5c-1fe4-4dbb-93c7-14f497a4e939","Type":"ContainerStarted","Data":"06c9969e1d51169ad21079e6aed664ea730a710536e2c86b706fc0d6ffc8ef5e"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.742278 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z2hg5" event={"ID":"92f397d2-d553-4a53-88b2-314c2dc7ebf6","Type":"ContainerDied","Data":"71042674cf241a0594e797aeb1d33b8781828a24c8a0265bb141be1271360622"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.759115 4718 generic.go:334] "Generic (PLEG): container finished" podID="92f397d2-d553-4a53-88b2-314c2dc7ebf6" containerID="71042674cf241a0594e797aeb1d33b8781828a24c8a0265bb141be1271360622" exitCode=0 Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.791667 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5fd4cf9989-fxv7b"] Dec 10 14:59:54 crc kubenswrapper[4718]: E1210 14:59:54.792427 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" containerName="mariadb-database-create" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.792454 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" containerName="mariadb-database-create" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.792698 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" containerName="mariadb-database-create" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.795154 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.795276 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q7sbn" event={"ID":"494a054f-4818-4649-83ef-5b058e0d9436","Type":"ContainerDied","Data":"f7a33f0cb27f4558a08f843750f07d74190973272c6b174236e70bfa6c0c2489"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.795462 4718 generic.go:334] "Generic (PLEG): container finished" podID="494a054f-4818-4649-83ef-5b058e0d9436" containerID="f7a33f0cb27f4558a08f843750f07d74190973272c6b174236e70bfa6c0c2489" exitCode=0 Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.800252 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.802258 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.820798 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vctd" event={"ID":"81ebdd62-3494-4d9a-8d04-ae6122173e69","Type":"ContainerStarted","Data":"7ada4e507dfe874f1e8114e2405ec490f308a9ea81aca627b428e58772e6bd3b"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 
14:59:54.826265 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fd4cf9989-fxv7b"] Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.835524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb5f4dbb8-qk86f" event={"ID":"aef87f6d-8e6b-4569-b12d-b10b34872959","Type":"ContainerStarted","Data":"ae9e0a03455c5787ced40bd5de8cd24d33cc721b98f8646495fc6192b2c2097a"} Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.876611 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9820-account-create-update-dph9s" podStartSLOduration=14.876518041 podStartE2EDuration="14.876518041s" podCreationTimestamp="2025-12-10 14:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:54.834096146 +0000 UTC m=+1699.783319593" watchObservedRunningTime="2025-12-10 14:59:54.876518041 +0000 UTC m=+1699.825741458" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-ovndb-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907156 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-combined-ca-bundle\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907264 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-httpd-config\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907316 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-config\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907403 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-public-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907455 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq97p\" (UniqueName: \"kubernetes.io/projected/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-kube-api-access-xq97p\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:54 crc kubenswrapper[4718]: I1210 14:59:54.907522 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-internal-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.011092 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-ovndb-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.011725 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-combined-ca-bundle\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.011834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-httpd-config\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.011929 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-config\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.012021 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-public-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.012098 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq97p\" (UniqueName: \"kubernetes.io/projected/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-kube-api-access-xq97p\") pod 
\"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.012203 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-internal-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.022691 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-httpd-config\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.027323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-internal-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.027323 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-public-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.027783 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-combined-ca-bundle\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " 
pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.029573 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-ovndb-tls-certs\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.035584 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq97p\" (UniqueName: \"kubernetes.io/projected/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-kube-api-access-xq97p\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.035585 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddd7c56e-7efb-44f9-8da2-45d0d54a9756-config\") pod \"neutron-5fd4cf9989-fxv7b\" (UID: \"ddd7c56e-7efb-44f9-8da2-45d0d54a9756\") " pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.287229 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.857215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9820-account-create-update-dph9s" event={"ID":"973eb84a-8809-4047-9112-4501f249ba68","Type":"ContainerStarted","Data":"4aba8bb9ec93f14d819c0a3a755c64b0c571a598daa9bd5de906e209defd3259"} Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.861150 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" event={"ID":"f6871804-824e-4dc2-a7db-f7c788915808","Type":"ContainerStarted","Data":"f8e0d2a6b8a8835d7855aa7ce5e84a5df9465e983900a5090f72894c3ceabac6"} Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.866813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-86zrn" event={"ID":"dffdff5c-1fe4-4dbb-93c7-14f497a4e939","Type":"ContainerStarted","Data":"bc8c905ce26e7970b0ce341f8acc58987ce97e6ab2c599c5a2e10fd5a25ca7b0"} Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.873061 4718 generic.go:334] "Generic (PLEG): container finished" podID="81ebdd62-3494-4d9a-8d04-ae6122173e69" containerID="7ada4e507dfe874f1e8114e2405ec490f308a9ea81aca627b428e58772e6bd3b" exitCode=0 Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.873165 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vctd" event={"ID":"81ebdd62-3494-4d9a-8d04-ae6122173e69","Type":"ContainerDied","Data":"7ada4e507dfe874f1e8114e2405ec490f308a9ea81aca627b428e58772e6bd3b"} Dec 10 14:59:55 crc kubenswrapper[4718]: I1210 14:59:55.876789 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb5f4dbb8-qk86f" event={"ID":"aef87f6d-8e6b-4569-b12d-b10b34872959","Type":"ContainerStarted","Data":"0d6cd96b278e48853b45b028f3a3155e17599ab7a3ee02355a52f044d7125bae"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.016930 4718 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-5fd4cf9989-fxv7b"] Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.746670 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.872000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c96sm\" (UniqueName: \"kubernetes.io/projected/494a054f-4818-4649-83ef-5b058e0d9436-kube-api-access-c96sm\") pod \"494a054f-4818-4649-83ef-5b058e0d9436\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.873462 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/494a054f-4818-4649-83ef-5b058e0d9436-operator-scripts\") pod \"494a054f-4818-4649-83ef-5b058e0d9436\" (UID: \"494a054f-4818-4649-83ef-5b058e0d9436\") " Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.875601 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494a054f-4818-4649-83ef-5b058e0d9436-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "494a054f-4818-4649-83ef-5b058e0d9436" (UID: "494a054f-4818-4649-83ef-5b058e0d9436"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.896316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" event={"ID":"53c8e09f-d39f-4f38-9e9e-199399c09a14","Type":"ContainerDied","Data":"67daca023360235070f0916125a12ff4a6b9fb0a52534451d78ef3853c049db2"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.896455 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67daca023360235070f0916125a12ff4a6b9fb0a52534451d78ef3853c049db2" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.899104 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc254fa2-5ac7-45d7-845b-f1ac0c898e98","Type":"ContainerStarted","Data":"ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.900908 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494a054f-4818-4649-83ef-5b058e0d9436-kube-api-access-c96sm" (OuterVolumeSpecName: "kube-api-access-c96sm") pod "494a054f-4818-4649-83ef-5b058e0d9436" (UID: "494a054f-4818-4649-83ef-5b058e0d9436"). InnerVolumeSpecName "kube-api-access-c96sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.906567 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd4cf9989-fxv7b" event={"ID":"ddd7c56e-7efb-44f9-8da2-45d0d54a9756","Type":"ContainerStarted","Data":"ba11d3190348f0e922daf76a0f6a6de82850345cea8a255a8b95a146d97e166c"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.924291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb5f4dbb8-qk86f" event={"ID":"aef87f6d-8e6b-4569-b12d-b10b34872959","Type":"ContainerStarted","Data":"b53f56f1fd1a035ec47cc925e5efbb4afa85a55ae69e7b6f1e4a1dc4beb22080"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.938719 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.943651 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-z2hg5" event={"ID":"92f397d2-d553-4a53-88b2-314c2dc7ebf6","Type":"ContainerDied","Data":"0b442256c5864966d1031fa770daa01452d00460c14542a5a24fcf676a7c141a"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.943693 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b442256c5864966d1031fa770daa01452d00460c14542a5a24fcf676a7c141a" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.947921 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerStarted","Data":"98c682674f43ddeeef8f3dc4167fea6925bd60c6b114f68a2d403e50048e343c"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.961049 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1795-account-create-update-xl996" 
event={"ID":"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240","Type":"ContainerDied","Data":"b259724fdbb71844f3d23ef06ac648d537016ad2f2d51af66858019ad2728a1b"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.961127 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b259724fdbb71844f3d23ef06ac648d537016ad2f2d51af66858019ad2728a1b" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.963963 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.969058 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q7sbn" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.969129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q7sbn" event={"ID":"494a054f-4818-4649-83ef-5b058e0d9436","Type":"ContainerDied","Data":"1a294a727382cd25bdd4a7517a99a743ba66d74f514dc09426cfc3ca090c29df"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.969186 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a294a727382cd25bdd4a7517a99a743ba66d74f514dc09426cfc3ca090c29df" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.978526 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c96sm\" (UniqueName: \"kubernetes.io/projected/494a054f-4818-4649-83ef-5b058e0d9436-kube-api-access-c96sm\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.978562 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/494a054f-4818-4649-83ef-5b058e0d9436-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.991905 4718 generic.go:334] "Generic (PLEG): container finished" podID="f6871804-824e-4dc2-a7db-f7c788915808" 
containerID="f8e0d2a6b8a8835d7855aa7ce5e84a5df9465e983900a5090f72894c3ceabac6" exitCode=0 Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.992032 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" event={"ID":"f6871804-824e-4dc2-a7db-f7c788915808","Type":"ContainerDied","Data":"f8e0d2a6b8a8835d7855aa7ce5e84a5df9465e983900a5090f72894c3ceabac6"} Dec 10 14:59:56 crc kubenswrapper[4718]: I1210 14:59:56.992798 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.010304 4718 generic.go:334] "Generic (PLEG): container finished" podID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerID="bc8c905ce26e7970b0ce341f8acc58987ce97e6ab2c599c5a2e10fd5a25ca7b0" exitCode=0 Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.010437 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-86zrn" event={"ID":"dffdff5c-1fe4-4dbb-93c7-14f497a4e939","Type":"ContainerDied","Data":"bc8c905ce26e7970b0ce341f8acc58987ce97e6ab2c599c5a2e10fd5a25ca7b0"} Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.022141 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 14:59:57 crc kubenswrapper[4718]: E1210 14:59:57.022517 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.113362 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tb8b7\" (UniqueName: \"kubernetes.io/projected/92f397d2-d553-4a53-88b2-314c2dc7ebf6-kube-api-access-tb8b7\") pod \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\" (UID: \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.117321 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-operator-scripts\") pod \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.117473 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfzjq\" (UniqueName: \"kubernetes.io/projected/53c8e09f-d39f-4f38-9e9e-199399c09a14-kube-api-access-qfzjq\") pod \"53c8e09f-d39f-4f38-9e9e-199399c09a14\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.117658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzprn\" (UniqueName: \"kubernetes.io/projected/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-kube-api-access-bzprn\") pod \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\" (UID: \"e8a444d7-abdd-44f0-82e1-cc8d1cd9b240\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.117733 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53c8e09f-d39f-4f38-9e9e-199399c09a14-operator-scripts\") pod \"53c8e09f-d39f-4f38-9e9e-199399c09a14\" (UID: \"53c8e09f-d39f-4f38-9e9e-199399c09a14\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.117809 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f397d2-d553-4a53-88b2-314c2dc7ebf6-operator-scripts\") pod \"92f397d2-d553-4a53-88b2-314c2dc7ebf6\" (UID: 
\"92f397d2-d553-4a53-88b2-314c2dc7ebf6\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.118203 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" (UID: "e8a444d7-abdd-44f0-82e1-cc8d1cd9b240"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.119770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53c8e09f-d39f-4f38-9e9e-199399c09a14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53c8e09f-d39f-4f38-9e9e-199399c09a14" (UID: "53c8e09f-d39f-4f38-9e9e-199399c09a14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.120781 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f397d2-d553-4a53-88b2-314c2dc7ebf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92f397d2-d553-4a53-88b2-314c2dc7ebf6" (UID: "92f397d2-d553-4a53-88b2-314c2dc7ebf6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.122210 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.122242 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53c8e09f-d39f-4f38-9e9e-199399c09a14-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.122254 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f397d2-d553-4a53-88b2-314c2dc7ebf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.123724 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f397d2-d553-4a53-88b2-314c2dc7ebf6-kube-api-access-tb8b7" (OuterVolumeSpecName: "kube-api-access-tb8b7") pod "92f397d2-d553-4a53-88b2-314c2dc7ebf6" (UID: "92f397d2-d553-4a53-88b2-314c2dc7ebf6"). InnerVolumeSpecName "kube-api-access-tb8b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.126683 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-kube-api-access-bzprn" (OuterVolumeSpecName: "kube-api-access-bzprn") pod "e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" (UID: "e8a444d7-abdd-44f0-82e1-cc8d1cd9b240"). InnerVolumeSpecName "kube-api-access-bzprn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.135789 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c8e09f-d39f-4f38-9e9e-199399c09a14-kube-api-access-qfzjq" (OuterVolumeSpecName: "kube-api-access-qfzjq") pod "53c8e09f-d39f-4f38-9e9e-199399c09a14" (UID: "53c8e09f-d39f-4f38-9e9e-199399c09a14"). InnerVolumeSpecName "kube-api-access-qfzjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.224638 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfzjq\" (UniqueName: \"kubernetes.io/projected/53c8e09f-d39f-4f38-9e9e-199399c09a14-kube-api-access-qfzjq\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.224671 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzprn\" (UniqueName: \"kubernetes.io/projected/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240-kube-api-access-bzprn\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.224681 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8b7\" (UniqueName: \"kubernetes.io/projected/92f397d2-d553-4a53-88b2-314c2dc7ebf6-kube-api-access-tb8b7\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.819195 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.850510 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-config\") pod \"f6871804-824e-4dc2-a7db-f7c788915808\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.850586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-svc\") pod \"f6871804-824e-4dc2-a7db-f7c788915808\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.850814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-nb\") pod \"f6871804-824e-4dc2-a7db-f7c788915808\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.850921 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-sb\") pod \"f6871804-824e-4dc2-a7db-f7c788915808\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.851027 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jpbw\" (UniqueName: \"kubernetes.io/projected/f6871804-824e-4dc2-a7db-f7c788915808-kube-api-access-4jpbw\") pod \"f6871804-824e-4dc2-a7db-f7c788915808\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.851211 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-swift-storage-0\") pod \"f6871804-824e-4dc2-a7db-f7c788915808\" (UID: \"f6871804-824e-4dc2-a7db-f7c788915808\") " Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.869778 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6871804-824e-4dc2-a7db-f7c788915808-kube-api-access-4jpbw" (OuterVolumeSpecName: "kube-api-access-4jpbw") pod "f6871804-824e-4dc2-a7db-f7c788915808" (UID: "f6871804-824e-4dc2-a7db-f7c788915808"). InnerVolumeSpecName "kube-api-access-4jpbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.902637 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-config" (OuterVolumeSpecName: "config") pod "f6871804-824e-4dc2-a7db-f7c788915808" (UID: "f6871804-824e-4dc2-a7db-f7c788915808"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.903659 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6871804-824e-4dc2-a7db-f7c788915808" (UID: "f6871804-824e-4dc2-a7db-f7c788915808"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.918077 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6871804-824e-4dc2-a7db-f7c788915808" (UID: "f6871804-824e-4dc2-a7db-f7c788915808"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.935161 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f6871804-824e-4dc2-a7db-f7c788915808" (UID: "f6871804-824e-4dc2-a7db-f7c788915808"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.936041 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6871804-824e-4dc2-a7db-f7c788915808" (UID: "f6871804-824e-4dc2-a7db-f7c788915808"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.956605 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.956650 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.956689 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jpbw\" (UniqueName: \"kubernetes.io/projected/f6871804-824e-4dc2-a7db-f7c788915808-kube-api-access-4jpbw\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.956723 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-swift-storage-0\") on 
node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.956755 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-config\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:57 crc kubenswrapper[4718]: I1210 14:59:57.956766 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6871804-824e-4dc2-a7db-f7c788915808-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.113025 4718 generic.go:334] "Generic (PLEG): container finished" podID="973eb84a-8809-4047-9112-4501f249ba68" containerID="4aba8bb9ec93f14d819c0a3a755c64b0c571a598daa9bd5de906e209defd3259" exitCode=0 Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.113208 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9820-account-create-update-dph9s" event={"ID":"973eb84a-8809-4047-9112-4501f249ba68","Type":"ContainerDied","Data":"4aba8bb9ec93f14d819c0a3a755c64b0c571a598daa9bd5de906e209defd3259"} Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.118484 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b814c215-d63c-49f4-8760-fb3688f9d9e3","Type":"ContainerStarted","Data":"5791b8b6236cb77364dfdedef24d9fc7a0f89cb75fd04ecf2bfd2755d31feff9"} Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.137584 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" event={"ID":"f6871804-824e-4dc2-a7db-f7c788915808","Type":"ContainerDied","Data":"323b59fc5c0353783072d2980a37a42b318e677f1539488f25591cc97377b3d2"} Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.137994 4718 scope.go:117] "RemoveContainer" containerID="f8e0d2a6b8a8835d7855aa7ce5e84a5df9465e983900a5090f72894c3ceabac6" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.138678 4718 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-tx99q" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.152631 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-abb8-account-create-update-gvhvt" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.154254 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1795-account-create-update-xl996" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.154413 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-z2hg5" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.154613 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd4cf9989-fxv7b" event={"ID":"ddd7c56e-7efb-44f9-8da2-45d0d54a9756","Type":"ContainerStarted","Data":"065428bbee71dbb9c0b6ecf848d8bf2c80122c9c3188e2e1c7ee88867d56aa26"} Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.713054 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7cb5f4dbb8-qk86f" podStartSLOduration=8.713018724 podStartE2EDuration="8.713018724s" podCreationTimestamp="2025-12-10 14:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:58.188665328 +0000 UTC m=+1703.137888745" watchObservedRunningTime="2025-12-10 14:59:58.713018724 +0000 UTC m=+1703.662242151" Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.802703 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-tx99q"] Dec 10 14:59:58 crc kubenswrapper[4718]: I1210 14:59:58.835745 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-tx99q"] Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.168359 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b814c215-d63c-49f4-8760-fb3688f9d9e3","Type":"ContainerStarted","Data":"59d04ca29c705cc505a53aa2c5e0105e010565eac6ffd489a511abfe5da5f22a"} Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.175656 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-86zrn" event={"ID":"dffdff5c-1fe4-4dbb-93c7-14f497a4e939","Type":"ContainerStarted","Data":"1239efa9e73f912dd8e937153183ce37860c0b79e87c56baa44eb8b5fb376ac7"} Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.179989 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5fd4cf9989-fxv7b" event={"ID":"ddd7c56e-7efb-44f9-8da2-45d0d54a9756","Type":"ContainerStarted","Data":"ed6da5132685ddfaf5035f453acc4eff085eb38bf0404b18adf7da33327acab6"} Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.181511 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.188859 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerStarted","Data":"9bcf705243f11ced10fe30234cf27288acb50f6abd504752daa8bf97090c014c"} Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.194598 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api-log" containerID="cri-o://ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d" gracePeriod=30 Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.195004 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc254fa2-5ac7-45d7-845b-f1ac0c898e98","Type":"ContainerStarted","Data":"92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc"} Dec 10 14:59:59 crc 
kubenswrapper[4718]: I1210 14:59:59.195050 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.195087 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api" containerID="cri-o://92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc" gracePeriod=30 Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.212282 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75958fc765-86zrn" podStartSLOduration=9.212239779 podStartE2EDuration="9.212239779s" podCreationTimestamp="2025-12-10 14:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:59.200157605 +0000 UTC m=+1704.149381022" watchObservedRunningTime="2025-12-10 14:59:59.212239779 +0000 UTC m=+1704.161463196" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.248436 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5fd4cf9989-fxv7b" podStartSLOduration=5.248375416 podStartE2EDuration="5.248375416s" podCreationTimestamp="2025-12-10 14:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 14:59:59.228129258 +0000 UTC m=+1704.177352675" watchObservedRunningTime="2025-12-10 14:59:59.248375416 +0000 UTC m=+1704.197598833" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.274087 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=21.274045281 podStartE2EDuration="21.274045281s" podCreationTimestamp="2025-12-10 14:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-10 14:59:59.254431158 +0000 UTC m=+1704.203654595" watchObservedRunningTime="2025-12-10 14:59:59.274045281 +0000 UTC m=+1704.223268698" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.770877 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.843672 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wxl7\" (UniqueName: \"kubernetes.io/projected/973eb84a-8809-4047-9112-4501f249ba68-kube-api-access-7wxl7\") pod \"973eb84a-8809-4047-9112-4501f249ba68\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.843766 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973eb84a-8809-4047-9112-4501f249ba68-operator-scripts\") pod \"973eb84a-8809-4047-9112-4501f249ba68\" (UID: \"973eb84a-8809-4047-9112-4501f249ba68\") " Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.845024 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973eb84a-8809-4047-9112-4501f249ba68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "973eb84a-8809-4047-9112-4501f249ba68" (UID: "973eb84a-8809-4047-9112-4501f249ba68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.857991 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973eb84a-8809-4047-9112-4501f249ba68-kube-api-access-7wxl7" (OuterVolumeSpecName: "kube-api-access-7wxl7") pod "973eb84a-8809-4047-9112-4501f249ba68" (UID: "973eb84a-8809-4047-9112-4501f249ba68"). InnerVolumeSpecName "kube-api-access-7wxl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.946517 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wxl7\" (UniqueName: \"kubernetes.io/projected/973eb84a-8809-4047-9112-4501f249ba68-kube-api-access-7wxl7\") on node \"crc\" DevicePath \"\"" Dec 10 14:59:59 crc kubenswrapper[4718]: I1210 14:59:59.946569 4718 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/973eb84a-8809-4047-9112-4501f249ba68-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.231042 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6871804-824e-4dc2-a7db-f7c788915808" path="/var/lib/kubelet/pods/f6871804-824e-4dc2-a7db-f7c788915808/volumes" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.284774 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9820-account-create-update-dph9s" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.285210 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9820-account-create-update-dph9s" event={"ID":"973eb84a-8809-4047-9112-4501f249ba68","Type":"ContainerDied","Data":"be143394851ef519ba9a8c0ffb5fdfe1c4ee70dced442f960eaedb4358a6e8f2"} Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.285293 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be143394851ef519ba9a8c0ffb5fdfe1c4ee70dced442f960eaedb4358a6e8f2" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.325503 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6"] Dec 10 15:00:00 crc kubenswrapper[4718]: E1210 15:00:00.326305 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973eb84a-8809-4047-9112-4501f249ba68" 
containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326328 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="973eb84a-8809-4047-9112-4501f249ba68" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: E1210 15:00:00.326371 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c8e09f-d39f-4f38-9e9e-199399c09a14" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326397 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c8e09f-d39f-4f38-9e9e-199399c09a14" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: E1210 15:00:00.326416 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494a054f-4818-4649-83ef-5b058e0d9436" containerName="mariadb-database-create" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326425 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="494a054f-4818-4649-83ef-5b058e0d9436" containerName="mariadb-database-create" Dec 10 15:00:00 crc kubenswrapper[4718]: E1210 15:00:00.326464 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326474 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: E1210 15:00:00.326490 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f397d2-d553-4a53-88b2-314c2dc7ebf6" containerName="mariadb-database-create" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326497 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f397d2-d553-4a53-88b2-314c2dc7ebf6" containerName="mariadb-database-create" Dec 10 15:00:00 crc kubenswrapper[4718]: E1210 15:00:00.326508 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6871804-824e-4dc2-a7db-f7c788915808" containerName="init" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326515 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6871804-824e-4dc2-a7db-f7c788915808" containerName="init" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326796 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326820 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f397d2-d553-4a53-88b2-314c2dc7ebf6" containerName="mariadb-database-create" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326830 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="973eb84a-8809-4047-9112-4501f249ba68" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326848 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6871804-824e-4dc2-a7db-f7c788915808" containerName="init" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326862 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="494a054f-4818-4649-83ef-5b058e0d9436" containerName="mariadb-database-create" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.326881 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c8e09f-d39f-4f38-9e9e-199399c09a14" containerName="mariadb-account-create-update" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.328002 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.339563 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6"] Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.368187 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.368565 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.373945 4718 generic.go:334] "Generic (PLEG): container finished" podID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerID="ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d" exitCode=143 Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.374091 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc254fa2-5ac7-45d7-845b-f1ac0c898e98","Type":"ContainerDied","Data":"ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d"} Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.376103 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.455833 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=20.10395112 podStartE2EDuration="22.455799015s" podCreationTimestamp="2025-12-10 14:59:38 +0000 UTC" firstStartedPulling="2025-12-10 14:59:51.370056424 +0000 UTC m=+1696.319279841" lastFinishedPulling="2025-12-10 14:59:53.721904319 +0000 UTC m=+1698.671127736" observedRunningTime="2025-12-10 15:00:00.431264238 +0000 UTC m=+1705.380487675" watchObservedRunningTime="2025-12-10 
15:00:00.455799015 +0000 UTC m=+1705.405022442" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.509764 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/372b2022-f87c-4e95-9831-74f4c801d98e-secret-volume\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.510300 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhz6w\" (UniqueName: \"kubernetes.io/projected/372b2022-f87c-4e95-9831-74f4c801d98e-kube-api-access-nhz6w\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.510633 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372b2022-f87c-4e95-9831-74f4c801d98e-config-volume\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.612964 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/372b2022-f87c-4e95-9831-74f4c801d98e-secret-volume\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.613055 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhz6w\" (UniqueName: 
\"kubernetes.io/projected/372b2022-f87c-4e95-9831-74f4c801d98e-kube-api-access-nhz6w\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.613178 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372b2022-f87c-4e95-9831-74f4c801d98e-config-volume\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.614740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372b2022-f87c-4e95-9831-74f4c801d98e-config-volume\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.620105 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/372b2022-f87c-4e95-9831-74f4c801d98e-secret-volume\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.637608 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhz6w\" (UniqueName: \"kubernetes.io/projected/372b2022-f87c-4e95-9831-74f4c801d98e-kube-api-access-nhz6w\") pod \"collect-profiles-29422980-7wqr6\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:00 crc kubenswrapper[4718]: I1210 15:00:00.741458 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.022004 4718 scope.go:117] "RemoveContainer" containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.104486 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6q6w4"] Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.106726 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.120533 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.121042 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.121274 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ncg47" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.189863 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6q6w4"] Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.251795 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.272426 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-config-data\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.272522 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.272707 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-scripts\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.272763 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdx8w\" (UniqueName: \"kubernetes.io/projected/dae6a105-9e31-412f-8809-ce10dfacfe35-kube-api-access-tdx8w\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.374444 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpc8h\" (UniqueName: \"kubernetes.io/projected/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-kube-api-access-hpc8h\") 
pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.374693 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-logs\") pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.374761 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-scripts\") pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.374864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-combined-ca-bundle\") pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.374924 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-etc-machine-id\") pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.375042 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data\") pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.375083 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data-custom\") pod \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\" (UID: \"cc254fa2-5ac7-45d7-845b-f1ac0c898e98\") " Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.384201 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-logs" (OuterVolumeSpecName: "logs") pod "cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.384296 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.388100 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-scripts\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.388239 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdx8w\" (UniqueName: \"kubernetes.io/projected/dae6a105-9e31-412f-8809-ce10dfacfe35-kube-api-access-tdx8w\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.388440 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-config-data\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.388499 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.388637 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.388655 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.418192 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-scripts\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.418416 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-scripts" (OuterVolumeSpecName: "scripts") pod "cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.432937 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-kube-api-access-hpc8h" (OuterVolumeSpecName: "kube-api-access-hpc8h") pod "cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "kube-api-access-hpc8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.433973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.444697 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-config-data\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.452687 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdx8w\" (UniqueName: \"kubernetes.io/projected/dae6a105-9e31-412f-8809-ce10dfacfe35-kube-api-access-tdx8w\") pod \"nova-cell0-conductor-db-sync-6q6w4\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.455794 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod 
"cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.505040 4718 generic.go:334] "Generic (PLEG): container finished" podID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerID="92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc" exitCode=0 Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.506342 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.507495 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc254fa2-5ac7-45d7-845b-f1ac0c898e98","Type":"ContainerDied","Data":"92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc"} Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.507570 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cc254fa2-5ac7-45d7-845b-f1ac0c898e98","Type":"ContainerDied","Data":"59e09028113e2e5577ba2309668687def831b0c8bb818f71fd313d292b0259a7"} Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.507593 4718 scope.go:117] "RemoveContainer" containerID="92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.519287 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.530827 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpc8h\" (UniqueName: \"kubernetes.io/projected/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-kube-api-access-hpc8h\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.530856 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.530869 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.531061 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.551620 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.616730 4718 scope.go:117] "RemoveContainer" containerID="ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.648885 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data" (OuterVolumeSpecName: "config-data") pod "cc254fa2-5ac7-45d7-845b-f1ac0c898e98" (UID: "cc254fa2-5ac7-45d7-845b-f1ac0c898e98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.740493 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc254fa2-5ac7-45d7-845b-f1ac0c898e98-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.873459 4718 scope.go:117] "RemoveContainer" containerID="92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc" Dec 10 15:00:01 crc kubenswrapper[4718]: E1210 15:00:01.887743 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc\": container with ID starting with 92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc not found: ID does not exist" containerID="92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.887832 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc"} err="failed to get container status \"92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc\": rpc error: code = NotFound desc = could not find container \"92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc\": container with ID starting with 92f9e72a1cf8e90371f6c150cd64bfd9616de4b148a76d88030082c23b0d29bc not found: ID does not exist" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.887875 4718 scope.go:117] "RemoveContainer" containerID="ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d" Dec 10 15:00:01 crc kubenswrapper[4718]: E1210 15:00:01.890241 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d\": container 
with ID starting with ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d not found: ID does not exist" containerID="ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.890305 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d"} err="failed to get container status \"ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d\": rpc error: code = NotFound desc = could not find container \"ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d\": container with ID starting with ec781200113ccc4d091b6ed5d935b9d1d8ba0d8a2174447a13133922ba36f62d not found: ID does not exist" Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.959497 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6"] Dec 10 15:00:01 crc kubenswrapper[4718]: I1210 15:00:01.982492 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.009508 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.056029 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" path="/var/lib/kubelet/pods/cc254fa2-5ac7-45d7-845b-f1ac0c898e98/volumes" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.057453 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:00:02 crc kubenswrapper[4718]: E1210 15:00:02.057953 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.057975 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api" Dec 10 15:00:02 crc kubenswrapper[4718]: E1210 15:00:02.058002 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api-log" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.058012 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api-log" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.058992 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api-log" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.059024 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc254fa2-5ac7-45d7-845b-f1ac0c898e98" containerName="cinder-api" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.061134 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.061299 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.065276 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.065603 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.071270 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176683 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176774 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176805 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlkdv\" (UniqueName: \"kubernetes.io/projected/fbb11f94-73a2-4870-94d1-f7c6a699bc57-kube-api-access-rlkdv\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176864 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbb11f94-73a2-4870-94d1-f7c6a699bc57-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176898 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176925 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-config-data-custom\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.176989 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb11f94-73a2-4870-94d1-f7c6a699bc57-logs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.177020 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-scripts\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.177068 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-config-data\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 
15:00:02.279849 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb11f94-73a2-4870-94d1-f7c6a699bc57-logs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.279920 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-scripts\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.279987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-config-data\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.280110 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.280156 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.280221 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlkdv\" (UniqueName: \"kubernetes.io/projected/fbb11f94-73a2-4870-94d1-f7c6a699bc57-kube-api-access-rlkdv\") pod \"cinder-api-0\" (UID: 
\"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.280281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbb11f94-73a2-4870-94d1-f7c6a699bc57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.280311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.280346 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-config-data-custom\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.291009 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.292619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fbb11f94-73a2-4870-94d1-f7c6a699bc57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.456526 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.456743 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-config-data-custom\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.458087 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbb11f94-73a2-4870-94d1-f7c6a699bc57-logs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.462262 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.466474 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-scripts\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.483259 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlkdv\" (UniqueName: \"kubernetes.io/projected/fbb11f94-73a2-4870-94d1-f7c6a699bc57-kube-api-access-rlkdv\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 
15:00:02.507781 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbb11f94-73a2-4870-94d1-f7c6a699bc57-config-data\") pod \"cinder-api-0\" (UID: \"fbb11f94-73a2-4870-94d1-f7c6a699bc57\") " pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.537946 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" event={"ID":"372b2022-f87c-4e95-9831-74f4c801d98e","Type":"ContainerStarted","Data":"77f9fd4828766907e9f8bb4391dd1efda9eaf42016db7a5af182801ca93f2acc"} Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.558012 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6q6w4"] Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.716989 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 10 15:00:02 crc kubenswrapper[4718]: I1210 15:00:02.973920 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:00:03 crc kubenswrapper[4718]: I1210 15:00:03.335353 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 10 15:00:03 crc kubenswrapper[4718]: I1210 15:00:03.468820 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 15:00:03 crc kubenswrapper[4718]: I1210 15:00:03.471548 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.179:8080/\": dial tcp 10.217.0.179:8080: connect: connection refused" Dec 10 15:00:03 crc kubenswrapper[4718]: I1210 15:00:03.553977 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"fbb11f94-73a2-4870-94d1-f7c6a699bc57","Type":"ContainerStarted","Data":"709bda9581a4df0303c430798958578cb574c56314cf0656ddd65e03896f502e"} Dec 10 15:00:03 crc kubenswrapper[4718]: I1210 15:00:03.555650 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" event={"ID":"dae6a105-9e31-412f-8809-ce10dfacfe35","Type":"ContainerStarted","Data":"1c0306920abbafd2c1c479e0d9a7fb0668c2ff4746df2b5258ebbd853e51b2d0"} Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.595935 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.610072 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerStarted","Data":"55b437207d781e209e0bdb47ccc6f14714239cdd2c89f1876c97f24de28499c8"} Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.620437 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" event={"ID":"372b2022-f87c-4e95-9831-74f4c801d98e","Type":"ContainerStarted","Data":"78d628407787c880daab351d6dcf7b862932fa2a901de90427eb6a5194b8c793"} Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.634666 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerStarted","Data":"5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e"} Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.652554 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fbb11f94-73a2-4870-94d1-f7c6a699bc57","Type":"ContainerStarted","Data":"be3261678d4753bde0abb96e7544c61ea1ec79bde9abf8e61a78fdafab5b9772"} Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.658559 4718 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" podStartSLOduration=4.658524214 podStartE2EDuration="4.658524214s" podCreationTimestamp="2025-12-10 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:04.648940714 +0000 UTC m=+1709.598164131" watchObservedRunningTime="2025-12-10 15:00:04.658524214 +0000 UTC m=+1709.607747631" Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.668984 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c0b43254-f8fe-4187-a8ce-aa65f7ac327e","Type":"ContainerStarted","Data":"e213901986c19676103e94bc220d0b68f95576f288c0c4ab5b83a8f50a6e3892"} Dec 10 15:00:04 crc kubenswrapper[4718]: I1210 15:00:04.742869 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=7.750440533 podStartE2EDuration="50.742836032s" podCreationTimestamp="2025-12-10 14:59:14 +0000 UTC" firstStartedPulling="2025-12-10 14:59:17.898032689 +0000 UTC m=+1662.847256106" lastFinishedPulling="2025-12-10 15:00:00.890428178 +0000 UTC m=+1705.839651605" observedRunningTime="2025-12-10 15:00:04.741056897 +0000 UTC m=+1709.690280314" watchObservedRunningTime="2025-12-10 15:00:04.742836032 +0000 UTC m=+1709.692059449" Dec 10 15:00:05 crc kubenswrapper[4718]: I1210 15:00:05.605749 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 15:00:06 crc kubenswrapper[4718]: I1210 15:00:06.090866 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 15:00:06 crc kubenswrapper[4718]: I1210 15:00:06.212877 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-5ccv8"] Dec 10 15:00:06 crc 
kubenswrapper[4718]: I1210 15:00:06.214326 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="dnsmasq-dns" containerID="cri-o://2f90c462de2b75a96cca3cd234b5626e538f196292dbd86ecd426a5f03fda4b8" gracePeriod=10 Dec 10 15:00:07 crc kubenswrapper[4718]: I1210 15:00:07.157826 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77c9ddb894-brvxz" Dec 10 15:00:07 crc kubenswrapper[4718]: I1210 15:00:07.267627 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6bb7f498bd-pjx6h"] Dec 10 15:00:07 crc kubenswrapper[4718]: I1210 15:00:07.268028 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon-log" containerID="cri-o://23af10f064789f8e654d5fe32175e0c280568083b73723271e9b524fe4f4ed76" gracePeriod=30 Dec 10 15:00:07 crc kubenswrapper[4718]: I1210 15:00:07.268262 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" containerID="cri-o://b117b070cadfb38d1e352bf4a066eddfacde995e6de0938901cbc9ed0bddda18" gracePeriod=30 Dec 10 15:00:07 crc kubenswrapper[4718]: I1210 15:00:07.277412 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.589786 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.606000 4718 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.172:5353: connect: connection refused" Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.628800 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.787294 4718 generic.go:334] "Generic (PLEG): container finished" podID="12eb657b-f669-41c3-9125-9266909468a3" containerID="2f90c462de2b75a96cca3cd234b5626e538f196292dbd86ecd426a5f03fda4b8" exitCode=0 Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.787376 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" event={"ID":"12eb657b-f669-41c3-9125-9266909468a3","Type":"ContainerDied","Data":"2f90c462de2b75a96cca3cd234b5626e538f196292dbd86ecd426a5f03fda4b8"} Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.806594 4718 generic.go:334] "Generic (PLEG): container finished" podID="372b2022-f87c-4e95-9831-74f4c801d98e" containerID="78d628407787c880daab351d6dcf7b862932fa2a901de90427eb6a5194b8c793" exitCode=0 Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.806815 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" event={"ID":"372b2022-f87c-4e95-9831-74f4c801d98e","Type":"ContainerDied","Data":"78d628407787c880daab351d6dcf7b862932fa2a901de90427eb6a5194b8c793"} Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.806976 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.857371 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:08 crc kubenswrapper[4718]: I1210 15:00:08.968916 
4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.022055 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.824889 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.850636 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.850755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-5ccv8" event={"ID":"12eb657b-f669-41c3-9125-9266909468a3","Type":"ContainerDied","Data":"e6815e87731b995b2225e0258fe50cdddcdde49264749d3a2322b5ccab7ea88e"} Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.850850 4718 scope.go:117] "RemoveContainer" containerID="2f90c462de2b75a96cca3cd234b5626e538f196292dbd86ecd426a5f03fda4b8" Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.851547 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="cinder-scheduler" containerID="cri-o://5791b8b6236cb77364dfdedef24d9fc7a0f89cb75fd04ecf2bfd2755d31feff9" gracePeriod=30 Dec 10 15:00:09 crc kubenswrapper[4718]: I1210 15:00:09.851570 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="probe" containerID="cri-o://59d04ca29c705cc505a53aa2c5e0105e010565eac6ffd489a511abfe5da5f22a" gracePeriod=30 Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.003978 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-config\") pod \"12eb657b-f669-41c3-9125-9266909468a3\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.004215 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-nb\") pod \"12eb657b-f669-41c3-9125-9266909468a3\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.004255 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-swift-storage-0\") pod \"12eb657b-f669-41c3-9125-9266909468a3\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.004304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqw8z\" (UniqueName: \"kubernetes.io/projected/12eb657b-f669-41c3-9125-9266909468a3-kube-api-access-sqw8z\") pod \"12eb657b-f669-41c3-9125-9266909468a3\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.004415 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-sb\") pod \"12eb657b-f669-41c3-9125-9266909468a3\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.004488 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-svc\") pod \"12eb657b-f669-41c3-9125-9266909468a3\" (UID: \"12eb657b-f669-41c3-9125-9266909468a3\") " Dec 10 15:00:10 crc 
kubenswrapper[4718]: I1210 15:00:10.021121 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12eb657b-f669-41c3-9125-9266909468a3-kube-api-access-sqw8z" (OuterVolumeSpecName: "kube-api-access-sqw8z") pod "12eb657b-f669-41c3-9125-9266909468a3" (UID: "12eb657b-f669-41c3-9125-9266909468a3"). InnerVolumeSpecName "kube-api-access-sqw8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.117334 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqw8z\" (UniqueName: \"kubernetes.io/projected/12eb657b-f669-41c3-9125-9266909468a3-kube-api-access-sqw8z\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.230151 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12eb657b-f669-41c3-9125-9266909468a3" (UID: "12eb657b-f669-41c3-9125-9266909468a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.244931 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.248027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-config" (OuterVolumeSpecName: "config") pod "12eb657b-f669-41c3-9125-9266909468a3" (UID: "12eb657b-f669-41c3-9125-9266909468a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.260372 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "12eb657b-f669-41c3-9125-9266909468a3" (UID: "12eb657b-f669-41c3-9125-9266909468a3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.407372 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.407764 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.564851 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.623333 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12eb657b-f669-41c3-9125-9266909468a3" (UID: "12eb657b-f669-41c3-9125-9266909468a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.642312 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12eb657b-f669-41c3-9125-9266909468a3" (UID: "12eb657b-f669-41c3-9125-9266909468a3"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.724517 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.725001 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12eb657b-f669-41c3-9125-9266909468a3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.816407 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-5ccv8"] Dec 10 15:00:10 crc kubenswrapper[4718]: I1210 15:00:10.824061 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-5ccv8"] Dec 10 15:00:11 crc kubenswrapper[4718]: I1210 15:00:11.021745 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:00:11 crc kubenswrapper[4718]: E1210 15:00:11.022090 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:00:12 crc kubenswrapper[4718]: I1210 15:00:12.027719 4718 generic.go:334] "Generic (PLEG): container finished" podID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerID="59d04ca29c705cc505a53aa2c5e0105e010565eac6ffd489a511abfe5da5f22a" exitCode=0 Dec 10 15:00:12 crc kubenswrapper[4718]: I1210 15:00:12.055332 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="12eb657b-f669-41c3-9125-9266909468a3" path="/var/lib/kubelet/pods/12eb657b-f669-41c3-9125-9266909468a3/volumes" Dec 10 15:00:12 crc kubenswrapper[4718]: I1210 15:00:12.056372 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b814c215-d63c-49f4-8760-fb3688f9d9e3","Type":"ContainerDied","Data":"59d04ca29c705cc505a53aa2c5e0105e010565eac6ffd489a511abfe5da5f22a"} Dec 10 15:00:12 crc kubenswrapper[4718]: I1210 15:00:12.146560 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:49242->10.217.0.155:8443: read: connection reset by peer" Dec 10 15:00:13 crc kubenswrapper[4718]: I1210 15:00:13.055811 4718 generic.go:334] "Generic (PLEG): container finished" podID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerID="b117b070cadfb38d1e352bf4a066eddfacde995e6de0938901cbc9ed0bddda18" exitCode=0 Dec 10 15:00:13 crc kubenswrapper[4718]: I1210 15:00:13.055901 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerDied","Data":"b117b070cadfb38d1e352bf4a066eddfacde995e6de0938901cbc9ed0bddda18"} Dec 10 15:00:14 crc kubenswrapper[4718]: I1210 15:00:14.909329 4718 scope.go:117] "RemoveContainer" containerID="2ad1a6919d25e557d590927ed1b0e6f4d76c12b13b66d93244c5b7512a178906" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.093169 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" event={"ID":"372b2022-f87c-4e95-9831-74f4c801d98e","Type":"ContainerDied","Data":"77f9fd4828766907e9f8bb4391dd1efda9eaf42016db7a5af182801ca93f2acc"} Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.093218 4718 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f9fd4828766907e9f8bb4391dd1efda9eaf42016db7a5af182801ca93f2acc" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.112428 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.279340 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/372b2022-f87c-4e95-9831-74f4c801d98e-secret-volume\") pod \"372b2022-f87c-4e95-9831-74f4c801d98e\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.279456 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhz6w\" (UniqueName: \"kubernetes.io/projected/372b2022-f87c-4e95-9831-74f4c801d98e-kube-api-access-nhz6w\") pod \"372b2022-f87c-4e95-9831-74f4c801d98e\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.279653 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372b2022-f87c-4e95-9831-74f4c801d98e-config-volume\") pod \"372b2022-f87c-4e95-9831-74f4c801d98e\" (UID: \"372b2022-f87c-4e95-9831-74f4c801d98e\") " Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.280751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372b2022-f87c-4e95-9831-74f4c801d98e-config-volume" (OuterVolumeSpecName: "config-volume") pod "372b2022-f87c-4e95-9831-74f4c801d98e" (UID: "372b2022-f87c-4e95-9831-74f4c801d98e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.289635 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372b2022-f87c-4e95-9831-74f4c801d98e-kube-api-access-nhz6w" (OuterVolumeSpecName: "kube-api-access-nhz6w") pod "372b2022-f87c-4e95-9831-74f4c801d98e" (UID: "372b2022-f87c-4e95-9831-74f4c801d98e"). InnerVolumeSpecName "kube-api-access-nhz6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.291351 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372b2022-f87c-4e95-9831-74f4c801d98e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "372b2022-f87c-4e95-9831-74f4c801d98e" (UID: "372b2022-f87c-4e95-9831-74f4c801d98e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.383846 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhz6w\" (UniqueName: \"kubernetes.io/projected/372b2022-f87c-4e95-9831-74f4c801d98e-kube-api-access-nhz6w\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.384161 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/372b2022-f87c-4e95-9831-74f4c801d98e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:15 crc kubenswrapper[4718]: I1210 15:00:15.384172 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/372b2022-f87c-4e95-9831-74f4c801d98e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:16 crc kubenswrapper[4718]: I1210 15:00:16.117301 4718 generic.go:334] "Generic (PLEG): container finished" podID="95261732-95ae-4618-a8a3-c883c287553e" 
containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" exitCode=1 Dec 10 15:00:16 crc kubenswrapper[4718]: I1210 15:00:16.117480 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6" Dec 10 15:00:16 crc kubenswrapper[4718]: I1210 15:00:16.117678 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerDied","Data":"5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e"} Dec 10 15:00:16 crc kubenswrapper[4718]: I1210 15:00:16.119687 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:00:16 crc kubenswrapper[4718]: E1210 15:00:16.120106 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 15:00:17 crc kubenswrapper[4718]: I1210 15:00:17.155102 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fbb11f94-73a2-4870-94d1-f7c6a699bc57","Type":"ContainerStarted","Data":"ce08b092773790508cc612c183af154a7072491b8ba7f01370155f1fd931af8b"} Dec 10 15:00:18 crc kubenswrapper[4718]: I1210 15:00:18.172750 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 10 15:00:18 crc kubenswrapper[4718]: I1210 15:00:18.200027 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=17.199977438 podStartE2EDuration="17.199977438s" podCreationTimestamp="2025-12-10 15:00:01 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:18.19764547 +0000 UTC m=+1723.146868887" watchObservedRunningTime="2025-12-10 15:00:18.199977438 +0000 UTC m=+1723.149200855" Dec 10 15:00:18 crc kubenswrapper[4718]: I1210 15:00:18.590101 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:18 crc kubenswrapper[4718]: I1210 15:00:18.591264 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:00:18 crc kubenswrapper[4718]: I1210 15:00:18.591536 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:18 crc kubenswrapper[4718]: I1210 15:00:18.591620 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 15:00:18 crc kubenswrapper[4718]: E1210 15:00:18.592013 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 15:00:19 crc kubenswrapper[4718]: I1210 15:00:19.189564 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:00:19 crc kubenswrapper[4718]: E1210 15:00:19.190092 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" 
pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 15:00:21 crc kubenswrapper[4718]: I1210 15:00:21.307591 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Dec 10 15:00:21 crc kubenswrapper[4718]: I1210 15:00:21.328812 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 15:00:21 crc kubenswrapper[4718]: I1210 15:00:21.336000 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7cb5f4dbb8-qk86f" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:21 crc kubenswrapper[4718]: I1210 15:00:21.336345 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7cb5f4dbb8-qk86f" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:21 crc kubenswrapper[4718]: I1210 15:00:21.336736 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7cb5f4dbb8-qk86f" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:21 crc kubenswrapper[4718]: I1210 15:00:21.338876 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7cb5f4dbb8-qk86f" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:22 crc kubenswrapper[4718]: E1210 15:00:22.689121 4718 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 10 15:00:22 crc kubenswrapper[4718]: E1210 15:00:22.689757 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmv67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5vctd_openshift-marketplace(81ebdd62-3494-4d9a-8d04-ae6122173e69): ErrImagePull: rpc error: code = Canceled desc 
= copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 10 15:00:22 crc kubenswrapper[4718]: E1210 15:00:22.690970 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5vctd" podUID="81ebdd62-3494-4d9a-8d04-ae6122173e69" Dec 10 15:00:22 crc kubenswrapper[4718]: I1210 15:00:22.856682 4718 scope.go:117] "RemoveContainer" containerID="18c2b9ed9086807a1a8a78e70010cd811dfcd5d47bc9bf542e696eb06efee24c" Dec 10 15:00:23 crc kubenswrapper[4718]: I1210 15:00:23.028873 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:00:23 crc kubenswrapper[4718]: E1210 15:00:23.029251 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:00:24 crc kubenswrapper[4718]: I1210 15:00:24.283715 4718 generic.go:334] "Generic (PLEG): container finished" podID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerID="5791b8b6236cb77364dfdedef24d9fc7a0f89cb75fd04ecf2bfd2755d31feff9" exitCode=0 Dec 10 15:00:24 crc kubenswrapper[4718]: I1210 15:00:24.283820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b814c215-d63c-49f4-8760-fb3688f9d9e3","Type":"ContainerDied","Data":"5791b8b6236cb77364dfdedef24d9fc7a0f89cb75fd04ecf2bfd2755d31feff9"} Dec 10 15:00:25 crc kubenswrapper[4718]: I1210 15:00:25.301368 4718 
prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5fd4cf9989-fxv7b" podUID="ddd7c56e-7efb-44f9-8da2-45d0d54a9756" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:25 crc kubenswrapper[4718]: I1210 15:00:25.308109 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-5fd4cf9989-fxv7b" podUID="ddd7c56e-7efb-44f9-8da2-45d0d54a9756" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:25 crc kubenswrapper[4718]: I1210 15:00:25.309513 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5fd4cf9989-fxv7b" podUID="ddd7c56e-7efb-44f9-8da2-45d0d54a9756" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 10 15:00:27 crc kubenswrapper[4718]: I1210 15:00:27.723647 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="fbb11f94-73a2-4870-94d1-f7c6a699bc57" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:00:28 crc kubenswrapper[4718]: I1210 15:00:28.519946 4718 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod53c8e09f-d39f-4f38-9e9e-199399c09a14"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod53c8e09f-d39f-4f38-9e9e-199399c09a14] : Timed out while waiting for systemd to remove kubepods-besteffort-pod53c8e09f_d39f_4f38_9e9e_199399c09a14.slice" Dec 10 15:00:28 crc kubenswrapper[4718]: I1210 15:00:28.722626 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="fbb11f94-73a2-4870-94d1-f7c6a699bc57" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.195:8776/healthcheck\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Dec 10 15:00:31 crc kubenswrapper[4718]: I1210 15:00:31.480214 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Dec 10 15:00:31 crc kubenswrapper[4718]: I1210 15:00:31.480432 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 15:00:32 crc kubenswrapper[4718]: I1210 15:00:32.393856 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 10 15:00:33 crc kubenswrapper[4718]: I1210 15:00:33.020638 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:00:33 crc kubenswrapper[4718]: E1210 15:00:33.021095 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 15:00:34 crc kubenswrapper[4718]: I1210 15:00:34.020470 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:00:34 crc kubenswrapper[4718]: E1210 15:00:34.020814 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:00:39 crc kubenswrapper[4718]: I1210 15:00:39.596887 4718 generic.go:334] "Generic (PLEG): container finished" podID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerID="23af10f064789f8e654d5fe32175e0c280568083b73723271e9b524fe4f4ed76" exitCode=137 Dec 10 15:00:39 crc kubenswrapper[4718]: I1210 15:00:39.596988 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6bb7f498bd-pjx6h" event={"ID":"57bc5c19-c945-4bca-adef-0ddf1b9fabac","Type":"ContainerDied","Data":"23af10f064789f8e654d5fe32175e0c280568083b73723271e9b524fe4f4ed76"} Dec 10 15:00:41 crc kubenswrapper[4718]: I1210 15:00:41.307964 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6bb7f498bd-pjx6h" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Dec 10 15:00:43 crc kubenswrapper[4718]: I1210 15:00:43.643070 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:00:43 crc kubenswrapper[4718]: E1210 15:00:43.945958 4718 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-nova-conductor:current" Dec 10 15:00:43 crc kubenswrapper[4718]: E1210 15:00:43.946320 4718 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-nova-conductor:current" Dec 10 15:00:43 crc kubenswrapper[4718]: E1210 15:00:43.946556 4718 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:nova-cell0-conductor-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-nova-conductor:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdx8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Rest
artPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-6q6w4_openstack(dae6a105-9e31-412f-8809-ce10dfacfe35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 10 15:00:43 crc kubenswrapper[4718]: E1210 15:00:43.949927 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" podUID="dae6a105-9e31-412f-8809-ce10dfacfe35" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.024767 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:00:44 crc kubenswrapper[4718]: E1210 15:00:44.025211 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 40s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(95261732-95ae-4618-a8a3-c883c287553e)\"" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.143923 4718 scope.go:117] "RemoveContainer" containerID="01cec3e8c46ffa2721da9de159356f6f456558ade0dd4bed790b394d87a42b36" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.525416 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.626516 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b814c215-d63c-49f4-8760-fb3688f9d9e3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b814c215-d63c-49f4-8760-fb3688f9d9e3" (UID: "b814c215-d63c-49f4-8760-fb3688f9d9e3"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.626581 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b814c215-d63c-49f4-8760-fb3688f9d9e3-etc-machine-id\") pod \"b814c215-d63c-49f4-8760-fb3688f9d9e3\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.626678 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data\") pod \"b814c215-d63c-49f4-8760-fb3688f9d9e3\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.626730 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbsr7\" (UniqueName: \"kubernetes.io/projected/b814c215-d63c-49f4-8760-fb3688f9d9e3-kube-api-access-wbsr7\") pod \"b814c215-d63c-49f4-8760-fb3688f9d9e3\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.626943 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-combined-ca-bundle\") pod \"b814c215-d63c-49f4-8760-fb3688f9d9e3\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.626991 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data-custom\") pod \"b814c215-d63c-49f4-8760-fb3688f9d9e3\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.627078 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-scripts\") pod \"b814c215-d63c-49f4-8760-fb3688f9d9e3\" (UID: \"b814c215-d63c-49f4-8760-fb3688f9d9e3\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.627666 4718 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b814c215-d63c-49f4-8760-fb3688f9d9e3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.637996 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b814c215-d63c-49f4-8760-fb3688f9d9e3" (UID: "b814c215-d63c-49f4-8760-fb3688f9d9e3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.645894 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b814c215-d63c-49f4-8760-fb3688f9d9e3-kube-api-access-wbsr7" (OuterVolumeSpecName: "kube-api-access-wbsr7") pod "b814c215-d63c-49f4-8760-fb3688f9d9e3" (UID: "b814c215-d63c-49f4-8760-fb3688f9d9e3"). InnerVolumeSpecName "kube-api-access-wbsr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.648594 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-scripts" (OuterVolumeSpecName: "scripts") pod "b814c215-d63c-49f4-8760-fb3688f9d9e3" (UID: "b814c215-d63c-49f4-8760-fb3688f9d9e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.668804 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerStarted","Data":"13b3a993de8477c61dfb0bdc85bda6748a6cb25aa537ce81262096a726dcce73"} Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.669020 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.669056 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-central-agent" containerID="cri-o://98c682674f43ddeeef8f3dc4167fea6925bd60c6b114f68a2d403e50048e343c" gracePeriod=30 Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.669551 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="sg-core" containerID="cri-o://55b437207d781e209e0bdb47ccc6f14714239cdd2c89f1876c97f24de28499c8" gracePeriod=30 Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.669579 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="proxy-httpd" containerID="cri-o://13b3a993de8477c61dfb0bdc85bda6748a6cb25aa537ce81262096a726dcce73" gracePeriod=30 Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.669659 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-notification-agent" containerID="cri-o://9bcf705243f11ced10fe30234cf27288acb50f6abd504752daa8bf97090c014c" gracePeriod=30 Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.671880 4718 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.680229 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6bb7f498bd-pjx6h" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.695014 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.695345 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b814c215-d63c-49f4-8760-fb3688f9d9e3","Type":"ContainerDied","Data":"25f9b445a7e77cb5b55d723adf9fc11af4e81076a940d61ac4fe2d5646271dc7"} Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.695430 4718 scope.go:117] "RemoveContainer" containerID="59d04ca29c705cc505a53aa2c5e0105e010565eac6ffd489a511abfe5da5f22a" Dec 10 15:00:44 crc kubenswrapper[4718]: E1210 15:00:44.700243 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-nova-conductor:current\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" podUID="dae6a105-9e31-412f-8809-ce10dfacfe35" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.713830 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.580328144 podStartE2EDuration="54.713730005s" podCreationTimestamp="2025-12-10 14:59:50 +0000 UTC" firstStartedPulling="2025-12-10 14:59:53.721835447 +0000 UTC m=+1698.671058864" lastFinishedPulling="2025-12-10 15:00:43.855237308 +0000 UTC m=+1748.804460725" observedRunningTime="2025-12-10 15:00:44.699253411 +0000 UTC m=+1749.648476838" watchObservedRunningTime="2025-12-10 15:00:44.713730005 +0000 UTC m=+1749.662953422" Dec 10 15:00:44 
crc kubenswrapper[4718]: I1210 15:00:44.731565 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbsr7\" (UniqueName: \"kubernetes.io/projected/b814c215-d63c-49f4-8760-fb3688f9d9e3-kube-api-access-wbsr7\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.731941 4718 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.732124 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.791989 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b814c215-d63c-49f4-8760-fb3688f9d9e3" (UID: "b814c215-d63c-49f4-8760-fb3688f9d9e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.833761 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-secret-key\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.833825 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bc5c19-c945-4bca-adef-0ddf1b9fabac-logs\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.833952 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-scripts\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.833997 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-tls-certs\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.834041 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-combined-ca-bundle\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.834213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngd4x\" (UniqueName: 
\"kubernetes.io/projected/57bc5c19-c945-4bca-adef-0ddf1b9fabac-kube-api-access-ngd4x\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.834447 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-config-data\") pod \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\" (UID: \"57bc5c19-c945-4bca-adef-0ddf1b9fabac\") " Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.835237 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.837696 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57bc5c19-c945-4bca-adef-0ddf1b9fabac-logs" (OuterVolumeSpecName: "logs") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.847590 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.858349 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bc5c19-c945-4bca-adef-0ddf1b9fabac-kube-api-access-ngd4x" (OuterVolumeSpecName: "kube-api-access-ngd4x") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "kube-api-access-ngd4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.886755 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data" (OuterVolumeSpecName: "config-data") pod "b814c215-d63c-49f4-8760-fb3688f9d9e3" (UID: "b814c215-d63c-49f4-8760-fb3688f9d9e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.887421 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-config-data" (OuterVolumeSpecName: "config-data") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.899672 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.904108 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-scripts" (OuterVolumeSpecName: "scripts") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.938094 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.938680 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bc5c19-c945-4bca-adef-0ddf1b9fabac-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.938805 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.938937 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.939076 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngd4x\" (UniqueName: \"kubernetes.io/projected/57bc5c19-c945-4bca-adef-0ddf1b9fabac-kube-api-access-ngd4x\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.939185 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b814c215-d63c-49f4-8760-fb3688f9d9e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.939291 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/57bc5c19-c945-4bca-adef-0ddf1b9fabac-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:44 crc kubenswrapper[4718]: I1210 15:00:44.985608 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "57bc5c19-c945-4bca-adef-0ddf1b9fabac" (UID: "57bc5c19-c945-4bca-adef-0ddf1b9fabac"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.009741 4718 scope.go:117] "RemoveContainer" containerID="5791b8b6236cb77364dfdedef24d9fc7a0f89cb75fd04ecf2bfd2755d31feff9" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.042350 4718 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bc5c19-c945-4bca-adef-0ddf1b9fabac-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.149455 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.167461 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.216164 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.216988 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="dnsmasq-dns" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217013 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="dnsmasq-dns" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217050 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217059 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217079 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="probe" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217089 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="probe" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217119 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372b2022-f87c-4e95-9831-74f4c801d98e" containerName="collect-profiles" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217127 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="372b2022-f87c-4e95-9831-74f4c801d98e" containerName="collect-profiles" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217144 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon-log" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217152 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon-log" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217169 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="cinder-scheduler" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217176 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="cinder-scheduler" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217192 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="init" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217197 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="init" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217448 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon-log" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217472 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="probe" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217500 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="12eb657b-f669-41c3-9125-9266909468a3" containerName="dnsmasq-dns" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217511 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217527 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217549 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="372b2022-f87c-4e95-9831-74f4c801d98e" containerName="collect-profiles" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217561 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" containerName="cinder-scheduler" Dec 10 15:00:45 crc kubenswrapper[4718]: E1210 15:00:45.217792 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.217801 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" containerName="horizon" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.224313 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.226754 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.268630 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.360539 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.360622 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.361022 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-scripts\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.361097 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.361273 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-config-data\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.361370 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65zgk\" (UniqueName: \"kubernetes.io/projected/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-kube-api-access-65zgk\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.399733 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6bb7f498bd-pjx6h"] Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.423594 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6bb7f498bd-pjx6h"] Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.465135 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-scripts\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.465214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.465299 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-config-data\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.465350 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65zgk\" (UniqueName: \"kubernetes.io/projected/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-kube-api-access-65zgk\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.465419 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.465493 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.466514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.475610 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-scripts\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.475640 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.475685 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.476680 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-config-data\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.496177 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65zgk\" (UniqueName: \"kubernetes.io/projected/74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7-kube-api-access-65zgk\") pod \"cinder-scheduler-0\" (UID: \"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7\") " pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.579358 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.713003 4718 scope.go:117] "RemoveContainer" containerID="b117b070cadfb38d1e352bf4a066eddfacde995e6de0938901cbc9ed0bddda18" Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.727778 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vctd" event={"ID":"81ebdd62-3494-4d9a-8d04-ae6122173e69","Type":"ContainerStarted","Data":"b05705fe5710eaa0adf469a4bac1ed215184752f8403121c2ba72f417c1b83b5"} Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.758737 4718 generic.go:334] "Generic (PLEG): container finished" podID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerID="55b437207d781e209e0bdb47ccc6f14714239cdd2c89f1876c97f24de28499c8" exitCode=2 Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.759181 4718 generic.go:334] "Generic (PLEG): container finished" podID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerID="98c682674f43ddeeef8f3dc4167fea6925bd60c6b114f68a2d403e50048e343c" exitCode=0 Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.759213 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerDied","Data":"55b437207d781e209e0bdb47ccc6f14714239cdd2c89f1876c97f24de28499c8"} Dec 10 15:00:45 crc kubenswrapper[4718]: I1210 15:00:45.759255 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerDied","Data":"98c682674f43ddeeef8f3dc4167fea6925bd60c6b114f68a2d403e50048e343c"} Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.019021 4718 scope.go:117] "RemoveContainer" containerID="23af10f064789f8e654d5fe32175e0c280568083b73723271e9b524fe4f4ed76" Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.048440 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="57bc5c19-c945-4bca-adef-0ddf1b9fabac" path="/var/lib/kubelet/pods/57bc5c19-c945-4bca-adef-0ddf1b9fabac/volumes" Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.051269 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b814c215-d63c-49f4-8760-fb3688f9d9e3" path="/var/lib/kubelet/pods/b814c215-d63c-49f4-8760-fb3688f9d9e3/volumes" Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.184104 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.774507 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7","Type":"ContainerStarted","Data":"bd51ff942cf3e23922be1e6e5e987c60e94736f4ce1f962288a1493e9f59eb01"} Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.779799 4718 generic.go:334] "Generic (PLEG): container finished" podID="81ebdd62-3494-4d9a-8d04-ae6122173e69" containerID="b05705fe5710eaa0adf469a4bac1ed215184752f8403121c2ba72f417c1b83b5" exitCode=0 Dec 10 15:00:46 crc kubenswrapper[4718]: I1210 15:00:46.779896 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vctd" event={"ID":"81ebdd62-3494-4d9a-8d04-ae6122173e69","Type":"ContainerDied","Data":"b05705fe5710eaa0adf469a4bac1ed215184752f8403121c2ba72f417c1b83b5"} Dec 10 15:00:47 crc kubenswrapper[4718]: I1210 15:00:47.817634 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7","Type":"ContainerStarted","Data":"39291afaf2f1326728b4f6cf1fedaf6955b197914483bea2088e200221d01133"} Dec 10 15:00:47 crc kubenswrapper[4718]: I1210 15:00:47.821474 4718 generic.go:334] "Generic (PLEG): container finished" podID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerID="9bcf705243f11ced10fe30234cf27288acb50f6abd504752daa8bf97090c014c" exitCode=0 Dec 10 15:00:47 crc 
kubenswrapper[4718]: I1210 15:00:47.821523 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerDied","Data":"9bcf705243f11ced10fe30234cf27288acb50f6abd504752daa8bf97090c014c"} Dec 10 15:00:48 crc kubenswrapper[4718]: I1210 15:00:48.845711 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5vctd" event={"ID":"81ebdd62-3494-4d9a-8d04-ae6122173e69","Type":"ContainerStarted","Data":"3eddfc5ce47931e34cbba40080c85696756405e7b83cab98cf0d4c735bf53342"} Dec 10 15:00:49 crc kubenswrapper[4718]: I1210 15:00:49.025739 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:00:49 crc kubenswrapper[4718]: E1210 15:00:49.026673 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:00:49 crc kubenswrapper[4718]: I1210 15:00:49.046675 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5vctd" podStartSLOduration=10.099445257 podStartE2EDuration="1m3.046639392s" podCreationTimestamp="2025-12-10 14:59:46 +0000 UTC" firstStartedPulling="2025-12-10 14:59:54.828001443 +0000 UTC m=+1699.777224860" lastFinishedPulling="2025-12-10 15:00:47.775195578 +0000 UTC m=+1752.724418995" observedRunningTime="2025-12-10 15:00:48.973183339 +0000 UTC m=+1753.922406766" watchObservedRunningTime="2025-12-10 15:00:49.046639392 +0000 UTC m=+1753.995862809" Dec 10 15:00:49 crc kubenswrapper[4718]: I1210 15:00:49.859846 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7","Type":"ContainerStarted","Data":"4214b454557293b8aa0602318675f8bf964fbd2a2f910ba7fc6831cf250b602e"} Dec 10 15:00:49 crc kubenswrapper[4718]: I1210 15:00:49.903160 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.903125659 podStartE2EDuration="4.903125659s" podCreationTimestamp="2025-12-10 15:00:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:00:49.898253227 +0000 UTC m=+1754.847476644" watchObservedRunningTime="2025-12-10 15:00:49.903125659 +0000 UTC m=+1754.852349076" Dec 10 15:00:50 crc kubenswrapper[4718]: I1210 15:00:50.580289 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 10 15:00:51 crc kubenswrapper[4718]: I1210 15:00:51.708044 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 15:00:52 crc kubenswrapper[4718]: I1210 15:00:52.983504 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:00:52 crc kubenswrapper[4718]: I1210 15:00:52.984280 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-log" containerID="cri-o://b6f101c7d12c30882997bb52c30f3bffc266d2a63aafc2678435cbabb9f6a25b" gracePeriod=30 Dec 10 15:00:52 crc kubenswrapper[4718]: I1210 15:00:52.985560 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-httpd" 
containerID="cri-o://7478490f9f7961a45a1eb2837c66227a4e5d802f587b00eb8cdcd1973c9c2efe" gracePeriod=30 Dec 10 15:00:53 crc kubenswrapper[4718]: I1210 15:00:53.930810 4718 generic.go:334] "Generic (PLEG): container finished" podID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerID="b6f101c7d12c30882997bb52c30f3bffc266d2a63aafc2678435cbabb9f6a25b" exitCode=143 Dec 10 15:00:53 crc kubenswrapper[4718]: I1210 15:00:53.930919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6d8b276-6f9f-4b17-b61c-996bdec36f85","Type":"ContainerDied","Data":"b6f101c7d12c30882997bb52c30f3bffc266d2a63aafc2678435cbabb9f6a25b"} Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.262335 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.265759 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-log" containerID="cri-o://07ce423c4440dd12f0a3bf202da6f5b8a56b40b1c7579615b85e0b871a76593c" gracePeriod=30 Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.265843 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-httpd" containerID="cri-o://c0fbe5ce5cebf4ef05059fbabea799d8263d4777808df61b99d66d470e2e1705" gracePeriod=30 Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.947407 4718 generic.go:334] "Generic (PLEG): container finished" podID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerID="07ce423c4440dd12f0a3bf202da6f5b8a56b40b1c7579615b85e0b871a76593c" exitCode=143 Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.947443 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6","Type":"ContainerDied","Data":"07ce423c4440dd12f0a3bf202da6f5b8a56b40b1c7579615b85e0b871a76593c"} Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.951486 4718 generic.go:334] "Generic (PLEG): container finished" podID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerID="7478490f9f7961a45a1eb2837c66227a4e5d802f587b00eb8cdcd1973c9c2efe" exitCode=0 Dec 10 15:00:54 crc kubenswrapper[4718]: I1210 15:00:54.951542 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6d8b276-6f9f-4b17-b61c-996bdec36f85","Type":"ContainerDied","Data":"7478490f9f7961a45a1eb2837c66227a4e5d802f587b00eb8cdcd1973c9c2efe"} Dec 10 15:00:55 crc kubenswrapper[4718]: I1210 15:00:55.306671 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5fd4cf9989-fxv7b" Dec 10 15:00:55 crc kubenswrapper[4718]: I1210 15:00:55.412970 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cb5f4dbb8-qk86f"] Dec 10 15:00:55 crc kubenswrapper[4718]: I1210 15:00:55.413274 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cb5f4dbb8-qk86f" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-api" containerID="cri-o://0d6cd96b278e48853b45b028f3a3155e17599ab7a3ee02355a52f044d7125bae" gracePeriod=30 Dec 10 15:00:55 crc kubenswrapper[4718]: I1210 15:00:55.413975 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7cb5f4dbb8-qk86f" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" containerID="cri-o://b53f56f1fd1a035ec47cc925e5efbb4afa85a55ae69e7b6f1e4a1dc4beb22080" gracePeriod=30 Dec 10 15:00:55 crc kubenswrapper[4718]: I1210 15:00:55.988570 4718 generic.go:334] "Generic (PLEG): container finished" podID="aef87f6d-8e6b-4569-b12d-b10b34872959" 
containerID="b53f56f1fd1a035ec47cc925e5efbb4afa85a55ae69e7b6f1e4a1dc4beb22080" exitCode=0 Dec 10 15:00:55 crc kubenswrapper[4718]: I1210 15:00:55.989212 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb5f4dbb8-qk86f" event={"ID":"aef87f6d-8e6b-4569-b12d-b10b34872959","Type":"ContainerDied","Data":"b53f56f1fd1a035ec47cc925e5efbb4afa85a55ae69e7b6f1e4a1dc4beb22080"} Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.005562 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e6d8b276-6f9f-4b17-b61c-996bdec36f85","Type":"ContainerDied","Data":"747defd0881d9398c746ed42f1d9de880dcd25b74424089274f0ad0173fca117"} Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.005633 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747defd0881d9398c746ed42f1d9de880dcd25b74424089274f0ad0173fca117" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.051894 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.120525 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-combined-ca-bundle\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.120872 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-httpd-run\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.121000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.121062 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-public-tls-certs\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.121153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-config-data\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.121316 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-scripts\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.136288 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-logs\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.136452 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dbbc\" (UniqueName: \"kubernetes.io/projected/e6d8b276-6f9f-4b17-b61c-996bdec36f85-kube-api-access-6dbbc\") pod \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\" (UID: \"e6d8b276-6f9f-4b17-b61c-996bdec36f85\") " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.375428 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.381240 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.399509 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-scripts" (OuterVolumeSpecName: "scripts") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.416703 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d8b276-6f9f-4b17-b61c-996bdec36f85-kube-api-access-6dbbc" (OuterVolumeSpecName: "kube-api-access-6dbbc") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "kube-api-access-6dbbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.417130 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-logs" (OuterVolumeSpecName: "logs") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.469247 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.469309 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.469321 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.469333 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6d8b276-6f9f-4b17-b61c-996bdec36f85-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.469347 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dbbc\" (UniqueName: \"kubernetes.io/projected/e6d8b276-6f9f-4b17-b61c-996bdec36f85-kube-api-access-6dbbc\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.482491 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.494859 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.537530 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-config-data" (OuterVolumeSpecName: "config-data") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.548654 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.556569 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e6d8b276-6f9f-4b17-b61c-996bdec36f85" (UID: "e6d8b276-6f9f-4b17-b61c-996bdec36f85"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.572247 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.572315 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.572335 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.572350 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6d8b276-6f9f-4b17-b61c-996bdec36f85-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.638792 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5vctd" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.638852 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5vctd" Dec 10 15:00:56 crc kubenswrapper[4718]: I1210 15:00:56.706490 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5vctd" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.031991 4718 generic.go:334] "Generic (PLEG): container finished" podID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerID="c0fbe5ce5cebf4ef05059fbabea799d8263d4777808df61b99d66d470e2e1705" exitCode=0 Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.032076 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6","Type":"ContainerDied","Data":"c0fbe5ce5cebf4ef05059fbabea799d8263d4777808df61b99d66d470e2e1705"} Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.032323 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.084974 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.108829 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.130147 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5vctd" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.136808 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:00:57 crc kubenswrapper[4718]: E1210 15:00:57.137736 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-log" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.137768 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-log" Dec 10 15:00:57 crc kubenswrapper[4718]: E1210 15:00:57.137801 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-httpd" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.137813 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-httpd" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.138207 4718 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-log" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.138229 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" containerName="glance-httpd" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.140967 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.149254 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.149964 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.152792 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198289 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198416 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198460 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-logs\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-config-data\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198612 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198643 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198704 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs4w\" (UniqueName: \"kubernetes.io/projected/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-kube-api-access-wjs4w\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.198813 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-scripts\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.261079 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5vctd"] Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302202 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-logs\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302618 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-config-data\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302680 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302740 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302828 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjs4w\" (UniqueName: \"kubernetes.io/projected/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-kube-api-access-wjs4w\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-scripts\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302954 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.302973 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.303573 4718 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.558862 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-logs\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.575845 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-scripts\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.606079 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.609070 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjs4w\" (UniqueName: \"kubernetes.io/projected/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-kube-api-access-wjs4w\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.610485 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.611204 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b-config-data\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.623274 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zn4lg"] Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.624617 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zn4lg" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="registry-server" containerID="cri-o://dc5a564dc52aa672c70b5d9fee134aef6d3157e97aea63ed6fc136ce88356718" gracePeriod=2 Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.660377 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b\") " pod="openstack/glance-default-external-api-0" Dec 10 15:00:57 crc kubenswrapper[4718]: I1210 15:00:57.778791 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.025977 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.064883 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d8b276-6f9f-4b17-b61c-996bdec36f85" path="/var/lib/kubelet/pods/e6d8b276-6f9f-4b17-b61c-996bdec36f85/volumes" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.362661 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.558819 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-scripts\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.558918 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkmmb\" (UniqueName: \"kubernetes.io/projected/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-kube-api-access-dkmmb\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.559047 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-combined-ca-bundle\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.559099 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-internal-tls-certs\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.559246 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-logs\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.559360 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-httpd-run\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.559504 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-config-data\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.559659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\" (UID: \"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6\") " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.562047 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-logs" (OuterVolumeSpecName: "logs") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.564851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.574069 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-kube-api-access-dkmmb" (OuterVolumeSpecName: "kube-api-access-dkmmb") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "kube-api-access-dkmmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.576821 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.588786 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-scripts" (OuterVolumeSpecName: "scripts") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.663501 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.663816 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkmmb\" (UniqueName: \"kubernetes.io/projected/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-kube-api-access-dkmmb\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.663831 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.663839 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.663879 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.665134 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.667972 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.702981 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.732990 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-config-data" (OuterVolumeSpecName: "config-data") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.740877 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" (UID: "7d1ecb19-5ab9-470c-8f0e-862e7675d2c6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.767221 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.767293 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.767307 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:58 crc kubenswrapper[4718]: I1210 15:00:58.767329 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.167238 4718 generic.go:334] "Generic (PLEG): container finished" podID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerID="dc5a564dc52aa672c70b5d9fee134aef6d3157e97aea63ed6fc136ce88356718" exitCode=0 Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.167378 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerDied","Data":"dc5a564dc52aa672c70b5d9fee134aef6d3157e97aea63ed6fc136ce88356718"} Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.181381 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerStarted","Data":"4cd0ac0e3bfc181c12bfad133bbf2649aff491034cb6c0c37e1c8b126081dcc6"} Dec 10 
15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.208670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b","Type":"ContainerStarted","Data":"ef5740d9eb3f626bfdd4b69f85d274398d286c25e1c917ed9075e010ad206f8e"} Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.233153 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.233378 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7d1ecb19-5ab9-470c-8f0e-862e7675d2c6","Type":"ContainerDied","Data":"b655742eb1cff476b3ee73af6d903e450810880a71c4b851ff5af3958ec0f8d6"} Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.233469 4718 scope.go:117] "RemoveContainer" containerID="c0fbe5ce5cebf4ef05059fbabea799d8263d4777808df61b99d66d470e2e1705" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.442670 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.443337 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.461556 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.494591 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:00:59 crc kubenswrapper[4718]: E1210 15:00:59.495240 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-log" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495258 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-log" Dec 10 15:00:59 crc kubenswrapper[4718]: E1210 15:00:59.495266 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="extract-utilities" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495285 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="extract-utilities" Dec 10 15:00:59 crc kubenswrapper[4718]: E1210 15:00:59.495305 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="registry-server" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495312 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="registry-server" Dec 10 15:00:59 crc kubenswrapper[4718]: E1210 15:00:59.495335 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-httpd" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495341 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-httpd" Dec 10 15:00:59 
crc kubenswrapper[4718]: E1210 15:00:59.495357 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="extract-content" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495363 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="extract-content" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495593 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-log" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495615 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" containerName="glance-httpd" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.495627 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" containerName="registry-server" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.496956 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.503841 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.504616 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.529209 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.552945 4718 scope.go:117] "RemoveContainer" containerID="07ce423c4440dd12f0a3bf202da6f5b8a56b40b1c7579615b85e0b871a76593c" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.610482 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-utilities\") pod \"d8eb2321-a379-4525-a11d-2b3f6712aa35\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.610721 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jmf8\" (UniqueName: \"kubernetes.io/projected/d8eb2321-a379-4525-a11d-2b3f6712aa35-kube-api-access-4jmf8\") pod \"d8eb2321-a379-4525-a11d-2b3f6712aa35\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.610833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-catalog-content\") pod \"d8eb2321-a379-4525-a11d-2b3f6712aa35\" (UID: \"d8eb2321-a379-4525-a11d-2b3f6712aa35\") " Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611373 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fvw7v\" (UniqueName: \"kubernetes.io/projected/2504be79-1852-48ec-b2d2-d687ae68bd09-kube-api-access-fvw7v\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611445 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611473 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611521 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2504be79-1852-48ec-b2d2-d687ae68bd09-logs\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611573 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611625 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2504be79-1852-48ec-b2d2-d687ae68bd09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611652 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.611694 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.618199 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-utilities" (OuterVolumeSpecName: "utilities") pod "d8eb2321-a379-4525-a11d-2b3f6712aa35" (UID: "d8eb2321-a379-4525-a11d-2b3f6712aa35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.621696 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8eb2321-a379-4525-a11d-2b3f6712aa35-kube-api-access-4jmf8" (OuterVolumeSpecName: "kube-api-access-4jmf8") pod "d8eb2321-a379-4525-a11d-2b3f6712aa35" (UID: "d8eb2321-a379-4525-a11d-2b3f6712aa35"). InnerVolumeSpecName "kube-api-access-4jmf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.698383 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8eb2321-a379-4525-a11d-2b3f6712aa35" (UID: "d8eb2321-a379-4525-a11d-2b3f6712aa35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.715870 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvw7v\" (UniqueName: \"kubernetes.io/projected/2504be79-1852-48ec-b2d2-d687ae68bd09-kube-api-access-fvw7v\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.715981 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.716032 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.716087 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2504be79-1852-48ec-b2d2-d687ae68bd09-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.716152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.716223 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2504be79-1852-48ec-b2d2-d687ae68bd09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.716256 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.716311 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.717176 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") device mount path \"/mnt/openstack/pv02\"" 
pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.717296 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2504be79-1852-48ec-b2d2-d687ae68bd09-logs\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.717665 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2504be79-1852-48ec-b2d2-d687ae68bd09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.721634 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jmf8\" (UniqueName: \"kubernetes.io/projected/d8eb2321-a379-4525-a11d-2b3f6712aa35-kube-api-access-4jmf8\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.721672 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.721689 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8eb2321-a379-4525-a11d-2b3f6712aa35-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.730344 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc 
kubenswrapper[4718]: I1210 15:00:59.730383 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.730777 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.742522 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2504be79-1852-48ec-b2d2-d687ae68bd09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.779338 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvw7v\" (UniqueName: \"kubernetes.io/projected/2504be79-1852-48ec-b2d2-d687ae68bd09-kube-api-access-fvw7v\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.821502 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"2504be79-1852-48ec-b2d2-d687ae68bd09\") " pod="openstack/glance-default-internal-api-0" Dec 10 15:00:59 crc kubenswrapper[4718]: I1210 15:00:59.826227 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.075472 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1ecb19-5ab9-470c-8f0e-862e7675d2c6" path="/var/lib/kubelet/pods/7d1ecb19-5ab9-470c-8f0e-862e7675d2c6/volumes" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.254951 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29422981-fh4sm"] Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.275023 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.293044 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29422981-fh4sm"] Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.321012 4718 generic.go:334] "Generic (PLEG): container finished" podID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerID="0d6cd96b278e48853b45b028f3a3155e17599ab7a3ee02355a52f044d7125bae" exitCode=0 Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.321209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb5f4dbb8-qk86f" event={"ID":"aef87f6d-8e6b-4569-b12d-b10b34872959","Type":"ContainerDied","Data":"0d6cd96b278e48853b45b028f3a3155e17599ab7a3ee02355a52f044d7125bae"} Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.364357 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9kv\" (UniqueName: \"kubernetes.io/projected/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-kube-api-access-hw9kv\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.382214 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-fernet-keys\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.382793 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-combined-ca-bundle\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.382839 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-config-data\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.408869 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.416769 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zn4lg" event={"ID":"d8eb2321-a379-4525-a11d-2b3f6712aa35","Type":"ContainerDied","Data":"36373d1c4a6a44ebcf07edffda7f23d3fa75aa50bd75bd330123854e7e525b34"} Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.416887 4718 scope.go:117] "RemoveContainer" containerID="dc5a564dc52aa672c70b5d9fee134aef6d3157e97aea63ed6fc136ce88356718" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.417289 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zn4lg" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.486264 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw9kv\" (UniqueName: \"kubernetes.io/projected/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-kube-api-access-hw9kv\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.486415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-fernet-keys\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.486533 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-combined-ca-bundle\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.486553 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-config-data\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.491651 4718 scope.go:117] "RemoveContainer" containerID="d024c5cbe9462581666877ad5f226c84f5d39bf34a588d577485aec013e52735" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.530004 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-config-data\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.530211 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-fernet-keys\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.532806 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw9kv\" (UniqueName: \"kubernetes.io/projected/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-kube-api-access-hw9kv\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.541269 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zn4lg"] Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.542701 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-combined-ca-bundle\") pod \"keystone-cron-29422981-fh4sm\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.589590 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-combined-ca-bundle\") pod \"aef87f6d-8e6b-4569-b12d-b10b34872959\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.589816 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-ovndb-tls-certs\") pod \"aef87f6d-8e6b-4569-b12d-b10b34872959\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.590111 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dtrl\" (UniqueName: \"kubernetes.io/projected/aef87f6d-8e6b-4569-b12d-b10b34872959-kube-api-access-5dtrl\") pod \"aef87f6d-8e6b-4569-b12d-b10b34872959\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.590153 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-httpd-config\") pod \"aef87f6d-8e6b-4569-b12d-b10b34872959\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.590246 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-config\") pod \"aef87f6d-8e6b-4569-b12d-b10b34872959\" (UID: \"aef87f6d-8e6b-4569-b12d-b10b34872959\") " Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.612513 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "aef87f6d-8e6b-4569-b12d-b10b34872959" (UID: "aef87f6d-8e6b-4569-b12d-b10b34872959"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.620827 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef87f6d-8e6b-4569-b12d-b10b34872959-kube-api-access-5dtrl" (OuterVolumeSpecName: "kube-api-access-5dtrl") pod "aef87f6d-8e6b-4569-b12d-b10b34872959" (UID: "aef87f6d-8e6b-4569-b12d-b10b34872959"). InnerVolumeSpecName "kube-api-access-5dtrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.621098 4718 scope.go:117] "RemoveContainer" containerID="80c3eb787783e95e9c407e3bc425fc5a6a85da6c857d5f6956eea6141fba9b9e" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.622862 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zn4lg"] Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.685102 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.693234 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dtrl\" (UniqueName: \"kubernetes.io/projected/aef87f6d-8e6b-4569-b12d-b10b34872959-kube-api-access-5dtrl\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.693291 4718 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.702702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aef87f6d-8e6b-4569-b12d-b10b34872959" (UID: "aef87f6d-8e6b-4569-b12d-b10b34872959"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.791169 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-config" (OuterVolumeSpecName: "config") pod "aef87f6d-8e6b-4569-b12d-b10b34872959" (UID: "aef87f6d-8e6b-4569-b12d-b10b34872959"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.797740 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.798141 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.834949 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 10 15:01:00 crc kubenswrapper[4718]: I1210 15:01:00.917827 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "aef87f6d-8e6b-4569-b12d-b10b34872959" (UID: "aef87f6d-8e6b-4569-b12d-b10b34872959"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.007056 4718 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef87f6d-8e6b-4569-b12d-b10b34872959-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.496211 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29422981-fh4sm"] Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.501273 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b","Type":"ContainerStarted","Data":"1511874eb3ea2ac51b958f6cf746e4a5869467fe7fac1412b5e38551dc68060c"} Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.540089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7cb5f4dbb8-qk86f" event={"ID":"aef87f6d-8e6b-4569-b12d-b10b34872959","Type":"ContainerDied","Data":"ae9e0a03455c5787ced40bd5de8cd24d33cc721b98f8646495fc6192b2c2097a"} Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.540184 4718 scope.go:117] "RemoveContainer" containerID="b53f56f1fd1a035ec47cc925e5efbb4afa85a55ae69e7b6f1e4a1dc4beb22080" Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.540495 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7cb5f4dbb8-qk86f" Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.564533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" event={"ID":"dae6a105-9e31-412f-8809-ce10dfacfe35","Type":"ContainerStarted","Data":"61f1739de043d1b2e2f7dfa75726df28ce1f3d5681467714e80208bdee87b116"} Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.582512 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2504be79-1852-48ec-b2d2-d687ae68bd09","Type":"ContainerStarted","Data":"5f7b6c4a040301ad50e2794dcd989a9229c9abd2e343e1d7671e189089129353"} Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.619722 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" podStartSLOduration=4.924087051 podStartE2EDuration="1m1.619674551s" podCreationTimestamp="2025-12-10 15:00:00 +0000 UTC" firstStartedPulling="2025-12-10 15:00:02.581036478 +0000 UTC m=+1707.530259885" lastFinishedPulling="2025-12-10 15:00:59.276623968 +0000 UTC m=+1764.225847385" observedRunningTime="2025-12-10 15:01:01.596995802 +0000 UTC m=+1766.546219219" watchObservedRunningTime="2025-12-10 15:01:01.619674551 +0000 UTC m=+1766.568897968" Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.649694 4718 scope.go:117] "RemoveContainer" containerID="0d6cd96b278e48853b45b028f3a3155e17599ab7a3ee02355a52f044d7125bae" Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.658610 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7cb5f4dbb8-qk86f"] Dec 10 15:01:01 crc kubenswrapper[4718]: I1210 15:01:01.681872 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7cb5f4dbb8-qk86f"] Dec 10 15:01:02 crc kubenswrapper[4718]: I1210 15:01:02.082565 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" path="/var/lib/kubelet/pods/aef87f6d-8e6b-4569-b12d-b10b34872959/volumes" Dec 10 15:01:02 crc kubenswrapper[4718]: I1210 15:01:02.085470 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8eb2321-a379-4525-a11d-2b3f6712aa35" path="/var/lib/kubelet/pods/d8eb2321-a379-4525-a11d-2b3f6712aa35/volumes" Dec 10 15:01:02 crc kubenswrapper[4718]: I1210 15:01:02.914274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b","Type":"ContainerStarted","Data":"9452d2ed429b8779c45026048640b598322301285d6c458927e3fa58a185399d"} Dec 10 15:01:02 crc kubenswrapper[4718]: I1210 15:01:02.948277 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-fh4sm" event={"ID":"dc3a87e8-6bf0-43e8-a75d-d743c4182d36","Type":"ContainerStarted","Data":"c5aa4c2995c41090aea73436cfdbcb54205dbb33601498efb9b2f9f9f4407bed"} Dec 10 15:01:02 crc kubenswrapper[4718]: I1210 15:01:02.948413 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-fh4sm" event={"ID":"dc3a87e8-6bf0-43e8-a75d-d743c4182d36","Type":"ContainerStarted","Data":"3b817e1a71a877da2173aac4ef9200e7aabf0ea5e9306533a2851f410b6e91ad"} Dec 10 15:01:03 crc kubenswrapper[4718]: I1210 15:01:03.019923 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.019887781 podStartE2EDuration="6.019887781s" podCreationTimestamp="2025-12-10 15:00:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:02.974301926 +0000 UTC m=+1767.923525343" watchObservedRunningTime="2025-12-10 15:01:03.019887781 +0000 UTC m=+1767.969111198" Dec 10 15:01:03 crc kubenswrapper[4718]: I1210 15:01:03.020112 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2504be79-1852-48ec-b2d2-d687ae68bd09","Type":"ContainerStarted","Data":"184d74cd9dcdaf726876ff8fbc2c86a478fa8de481b3a45c5891cfab76b7fe9d"} Dec 10 15:01:03 crc kubenswrapper[4718]: I1210 15:01:03.022068 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:01:03 crc kubenswrapper[4718]: E1210 15:01:03.022313 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:01:03 crc kubenswrapper[4718]: I1210 15:01:03.059524 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29422981-fh4sm" podStartSLOduration=3.059495155 podStartE2EDuration="3.059495155s" podCreationTimestamp="2025-12-10 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:03.007027588 +0000 UTC m=+1767.956250995" watchObservedRunningTime="2025-12-10 15:01:03.059495155 +0000 UTC m=+1768.008718572" Dec 10 15:01:04 crc kubenswrapper[4718]: I1210 15:01:04.222636 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2504be79-1852-48ec-b2d2-d687ae68bd09","Type":"ContainerStarted","Data":"359815ac0253612a31649e1522d12f830ec32309531dd78e8f771854026c81f0"} Dec 10 15:01:04 crc kubenswrapper[4718]: I1210 15:01:04.267287 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.267233561 
podStartE2EDuration="5.267233561s" podCreationTimestamp="2025-12-10 15:00:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:04.248778607 +0000 UTC m=+1769.198002024" watchObservedRunningTime="2025-12-10 15:01:04.267233561 +0000 UTC m=+1769.216456978" Dec 10 15:01:07 crc kubenswrapper[4718]: I1210 15:01:07.780199 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 15:01:07 crc kubenswrapper[4718]: I1210 15:01:07.781340 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 10 15:01:07 crc kubenswrapper[4718]: I1210 15:01:07.822884 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 15:01:07 crc kubenswrapper[4718]: I1210 15:01:07.944432 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 10 15:01:08 crc kubenswrapper[4718]: I1210 15:01:08.287707 4718 generic.go:334] "Generic (PLEG): container finished" podID="dc3a87e8-6bf0-43e8-a75d-d743c4182d36" containerID="c5aa4c2995c41090aea73436cfdbcb54205dbb33601498efb9b2f9f9f4407bed" exitCode=0 Dec 10 15:01:08 crc kubenswrapper[4718]: I1210 15:01:08.287804 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-fh4sm" event={"ID":"dc3a87e8-6bf0-43e8-a75d-d743c4182d36","Type":"ContainerDied","Data":"c5aa4c2995c41090aea73436cfdbcb54205dbb33601498efb9b2f9f9f4407bed"} Dec 10 15:01:08 crc kubenswrapper[4718]: I1210 15:01:08.288532 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 10 15:01:08 crc kubenswrapper[4718]: I1210 15:01:08.288706 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Dec 10 15:01:08 crc kubenswrapper[4718]: I1210 15:01:08.590512 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:08 crc kubenswrapper[4718]: I1210 15:01:08.631594 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.300602 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.352446 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.827560 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.828170 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.833604 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.885049 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.892228 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.918802 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw9kv\" (UniqueName: \"kubernetes.io/projected/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-kube-api-access-hw9kv\") pod \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.919105 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-config-data\") pod \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.919186 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-fernet-keys\") pod \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.919267 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-combined-ca-bundle\") pod \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\" (UID: \"dc3a87e8-6bf0-43e8-a75d-d743c4182d36\") " Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.948027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dc3a87e8-6bf0-43e8-a75d-d743c4182d36" (UID: "dc3a87e8-6bf0-43e8-a75d-d743c4182d36"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:09 crc kubenswrapper[4718]: I1210 15:01:09.948434 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-kube-api-access-hw9kv" (OuterVolumeSpecName: "kube-api-access-hw9kv") pod "dc3a87e8-6bf0-43e8-a75d-d743c4182d36" (UID: "dc3a87e8-6bf0-43e8-a75d-d743c4182d36"). InnerVolumeSpecName "kube-api-access-hw9kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.039431 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw9kv\" (UniqueName: \"kubernetes.io/projected/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-kube-api-access-hw9kv\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.039478 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.060717 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3a87e8-6bf0-43e8-a75d-d743c4182d36" (UID: "dc3a87e8-6bf0-43e8-a75d-d743c4182d36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.074867 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-config-data" (OuterVolumeSpecName: "config-data") pod "dc3a87e8-6bf0-43e8-a75d-d743c4182d36" (UID: "dc3a87e8-6bf0-43e8-a75d-d743c4182d36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.141989 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.142055 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3a87e8-6bf0-43e8-a75d-d743c4182d36-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.313522 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29422981-fh4sm" event={"ID":"dc3a87e8-6bf0-43e8-a75d-d743c4182d36","Type":"ContainerDied","Data":"3b817e1a71a877da2173aac4ef9200e7aabf0ea5e9306533a2851f410b6e91ad"} Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.313597 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b817e1a71a877da2173aac4ef9200e7aabf0ea5e9306533a2851f410b6e91ad" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.314190 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.314254 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.314880 4718 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-cron-29422981-fh4sm" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.927968 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.928635 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:01:10 crc kubenswrapper[4718]: I1210 15:01:10.940315 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 10 15:01:12 crc kubenswrapper[4718]: I1210 15:01:12.788095 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:12 crc kubenswrapper[4718]: I1210 15:01:12.788775 4718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 10 15:01:12 crc kubenswrapper[4718]: I1210 15:01:12.954976 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 10 15:01:16 crc kubenswrapper[4718]: I1210 15:01:16.404874 4718 generic.go:334] "Generic (PLEG): container finished" podID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerID="13b3a993de8477c61dfb0bdc85bda6748a6cb25aa537ce81262096a726dcce73" exitCode=137 Dec 10 15:01:16 crc kubenswrapper[4718]: I1210 15:01:16.404965 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerDied","Data":"13b3a993de8477c61dfb0bdc85bda6748a6cb25aa537ce81262096a726dcce73"} Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.021080 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:01:17 crc kubenswrapper[4718]: E1210 15:01:17.021999 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.425196 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd","Type":"ContainerDied","Data":"df06077e44d2eac051401c5b6501f493fe043345d76299a6b66a21a8792ab47e"} Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.425263 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df06077e44d2eac051401c5b6501f493fe043345d76299a6b66a21a8792ab47e" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.488360 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.540986 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-sg-core-conf-yaml\") pod \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.541082 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j65dn\" (UniqueName: \"kubernetes.io/projected/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-kube-api-access-j65dn\") pod \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.541225 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-scripts\") pod 
\"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.541472 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-config-data\") pod \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.541523 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-run-httpd\") pod \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.541549 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-log-httpd\") pod \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.541643 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-combined-ca-bundle\") pod \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\" (UID: \"f7cb8719-7e84-49ea-a5a7-e48eca56b5bd\") " Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.542012 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.542052 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.542231 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.542254 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.548677 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-scripts" (OuterVolumeSpecName: "scripts") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.554278 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-kube-api-access-j65dn" (OuterVolumeSpecName: "kube-api-access-j65dn") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "kube-api-access-j65dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.583015 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.637765 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.645068 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.645138 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.645150 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j65dn\" (UniqueName: \"kubernetes.io/projected/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-kube-api-access-j65dn\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.645164 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-scripts\") on 
node \"crc\" DevicePath \"\"" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.650615 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-config-data" (OuterVolumeSpecName: "config-data") pod "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" (UID: "f7cb8719-7e84-49ea-a5a7-e48eca56b5bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:17 crc kubenswrapper[4718]: I1210 15:01:17.747306 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.436225 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.468507 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.496313 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.533815 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.538862 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-notification-agent" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.539310 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-notification-agent" Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.539445 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="sg-core" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 
15:01:18.539550 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="sg-core" Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.539678 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3a87e8-6bf0-43e8-a75d-d743c4182d36" containerName="keystone-cron" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.539781 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3a87e8-6bf0-43e8-a75d-d743c4182d36" containerName="keystone-cron" Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.539879 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-api" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.540003 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-api" Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.540095 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="proxy-httpd" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.540179 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="proxy-httpd" Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.540289 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-central-agent" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.540369 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-central-agent" Dec 10 15:01:18 crc kubenswrapper[4718]: E1210 15:01:18.540497 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.540720 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.541270 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="sg-core" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.541413 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-api" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.541564 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="proxy-httpd" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.541725 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3a87e8-6bf0-43e8-a75d-d743c4182d36" containerName="keystone-cron" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.541859 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-notification-agent" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.541959 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef87f6d-8e6b-4569-b12d-b10b34872959" containerName="neutron-httpd" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.542096 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" containerName="ceilometer-central-agent" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.545580 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.546043 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.549927 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.550005 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.590574 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-scripts\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.590762 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-config-data\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.590849 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.590877 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.591076 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-run-httpd\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.591170 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdczf\" (UniqueName: \"kubernetes.io/projected/6d85809d-1325-40dc-8763-a62447fdaf5a-kube-api-access-zdczf\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.591224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-log-httpd\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.693672 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-config-data\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.694574 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.694623 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.694651 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-run-httpd\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.694676 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-log-httpd\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.694693 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdczf\" (UniqueName: \"kubernetes.io/projected/6d85809d-1325-40dc-8763-a62447fdaf5a-kube-api-access-zdczf\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.694752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-scripts\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.695375 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-log-httpd\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.695595 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-run-httpd\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.699623 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.700129 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-config-data\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.700777 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-scripts\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.713640 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.721733 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdczf\" (UniqueName: \"kubernetes.io/projected/6d85809d-1325-40dc-8763-a62447fdaf5a-kube-api-access-zdczf\") pod \"ceilometer-0\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " pod="openstack/ceilometer-0" Dec 10 15:01:18 crc kubenswrapper[4718]: I1210 15:01:18.873628 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:19 crc kubenswrapper[4718]: I1210 15:01:19.466342 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:20 crc kubenswrapper[4718]: I1210 15:01:20.040799 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cb8719-7e84-49ea-a5a7-e48eca56b5bd" path="/var/lib/kubelet/pods/f7cb8719-7e84-49ea-a5a7-e48eca56b5bd/volumes" Dec 10 15:01:20 crc kubenswrapper[4718]: I1210 15:01:20.468308 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerStarted","Data":"2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8"} Dec 10 15:01:20 crc kubenswrapper[4718]: I1210 15:01:20.470089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerStarted","Data":"284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a"} Dec 10 15:01:20 crc kubenswrapper[4718]: I1210 15:01:20.470209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerStarted","Data":"a14abb2dc70c7ebdbf16930b9edf397ac45221de5183469b7676361424247d9a"} Dec 10 15:01:23 crc kubenswrapper[4718]: I1210 15:01:23.513992 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerStarted","Data":"7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe"} Dec 10 15:01:25 crc kubenswrapper[4718]: I1210 15:01:25.541933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerStarted","Data":"5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107"} Dec 10 
15:01:25 crc kubenswrapper[4718]: I1210 15:01:25.543002 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.020510 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:01:29 crc kubenswrapper[4718]: E1210 15:01:29.022490 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.184106 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.961189024 podStartE2EDuration="11.18406092s" podCreationTimestamp="2025-12-10 15:01:18 +0000 UTC" firstStartedPulling="2025-12-10 15:01:19.468876973 +0000 UTC m=+1784.418100390" lastFinishedPulling="2025-12-10 15:01:24.691748869 +0000 UTC m=+1789.640972286" observedRunningTime="2025-12-10 15:01:25.570520724 +0000 UTC m=+1790.519744151" watchObservedRunningTime="2025-12-10 15:01:29.18406092 +0000 UTC m=+1794.133284337" Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.196806 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.197447 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" containerID="cri-o://4cd0ac0e3bfc181c12bfad133bbf2649aff491034cb6c0c37e1c8b126081dcc6" gracePeriod=30 Dec 10 15:01:29 crc 
kubenswrapper[4718]: I1210 15:01:29.825445 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.827022 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-notification-agent" containerID="cri-o://2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8" gracePeriod=30 Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.827106 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="proxy-httpd" containerID="cri-o://5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107" gracePeriod=30 Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.827127 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="sg-core" containerID="cri-o://7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe" gracePeriod=30 Dec 10 15:01:29 crc kubenswrapper[4718]: I1210 15:01:29.826995 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-central-agent" containerID="cri-o://284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a" gracePeriod=30 Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.613297 4718 generic.go:334] "Generic (PLEG): container finished" podID="dae6a105-9e31-412f-8809-ce10dfacfe35" containerID="61f1739de043d1b2e2f7dfa75726df28ce1f3d5681467714e80208bdee87b116" exitCode=0 Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.613364 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" 
event={"ID":"dae6a105-9e31-412f-8809-ce10dfacfe35","Type":"ContainerDied","Data":"61f1739de043d1b2e2f7dfa75726df28ce1f3d5681467714e80208bdee87b116"} Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.622799 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerID="5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107" exitCode=0 Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.622835 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerID="7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe" exitCode=2 Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.622844 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerID="284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a" exitCode=0 Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.622873 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerDied","Data":"5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107"} Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.622907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerDied","Data":"7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe"} Dec 10 15:01:30 crc kubenswrapper[4718]: I1210 15:01:30.622916 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerDied","Data":"284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a"} Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.074032 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.173338 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-scripts\") pod \"dae6a105-9e31-412f-8809-ce10dfacfe35\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.173535 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-config-data\") pod \"dae6a105-9e31-412f-8809-ce10dfacfe35\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.173677 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-combined-ca-bundle\") pod \"dae6a105-9e31-412f-8809-ce10dfacfe35\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.173799 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdx8w\" (UniqueName: \"kubernetes.io/projected/dae6a105-9e31-412f-8809-ce10dfacfe35-kube-api-access-tdx8w\") pod \"dae6a105-9e31-412f-8809-ce10dfacfe35\" (UID: \"dae6a105-9e31-412f-8809-ce10dfacfe35\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.190311 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dae6a105-9e31-412f-8809-ce10dfacfe35-kube-api-access-tdx8w" (OuterVolumeSpecName: "kube-api-access-tdx8w") pod "dae6a105-9e31-412f-8809-ce10dfacfe35" (UID: "dae6a105-9e31-412f-8809-ce10dfacfe35"). InnerVolumeSpecName "kube-api-access-tdx8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.202332 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-scripts" (OuterVolumeSpecName: "scripts") pod "dae6a105-9e31-412f-8809-ce10dfacfe35" (UID: "dae6a105-9e31-412f-8809-ce10dfacfe35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.219949 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-config-data" (OuterVolumeSpecName: "config-data") pod "dae6a105-9e31-412f-8809-ce10dfacfe35" (UID: "dae6a105-9e31-412f-8809-ce10dfacfe35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.223866 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dae6a105-9e31-412f-8809-ce10dfacfe35" (UID: "dae6a105-9e31-412f-8809-ce10dfacfe35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.280970 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.281462 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.281892 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdx8w\" (UniqueName: \"kubernetes.io/projected/dae6a105-9e31-412f-8809-ce10dfacfe35-kube-api-access-tdx8w\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.282020 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dae6a105-9e31-412f-8809-ce10dfacfe35-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.325289 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.383555 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-combined-ca-bundle\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.383687 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-scripts\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.383778 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-run-httpd\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.383923 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdczf\" (UniqueName: \"kubernetes.io/projected/6d85809d-1325-40dc-8763-a62447fdaf5a-kube-api-access-zdczf\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.384023 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-log-httpd\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.384066 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-config-data\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.384252 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-sg-core-conf-yaml\") pod \"6d85809d-1325-40dc-8763-a62447fdaf5a\" (UID: \"6d85809d-1325-40dc-8763-a62447fdaf5a\") " Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.385751 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.386027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.389210 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-scripts" (OuterVolumeSpecName: "scripts") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.390017 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d85809d-1325-40dc-8763-a62447fdaf5a-kube-api-access-zdczf" (OuterVolumeSpecName: "kube-api-access-zdczf") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). InnerVolumeSpecName "kube-api-access-zdczf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.419769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.474142 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.487251 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdczf\" (UniqueName: \"kubernetes.io/projected/6d85809d-1325-40dc-8763-a62447fdaf5a-kube-api-access-zdczf\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.487308 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.487319 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.487331 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.487343 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.487351 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d85809d-1325-40dc-8763-a62447fdaf5a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.499773 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-config-data" (OuterVolumeSpecName: "config-data") pod "6d85809d-1325-40dc-8763-a62447fdaf5a" (UID: "6d85809d-1325-40dc-8763-a62447fdaf5a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.590696 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d85809d-1325-40dc-8763-a62447fdaf5a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.656756 4718 generic.go:334] "Generic (PLEG): container finished" podID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerID="2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8" exitCode=0 Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.656974 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.661109 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerDied","Data":"2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8"} Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.661380 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d85809d-1325-40dc-8763-a62447fdaf5a","Type":"ContainerDied","Data":"a14abb2dc70c7ebdbf16930b9edf397ac45221de5183469b7676361424247d9a"} Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.661516 4718 scope.go:117] "RemoveContainer" containerID="5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.671838 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.673892 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6q6w4" event={"ID":"dae6a105-9e31-412f-8809-ce10dfacfe35","Type":"ContainerDied","Data":"1c0306920abbafd2c1c479e0d9a7fb0668c2ff4746df2b5258ebbd853e51b2d0"} Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.674018 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c0306920abbafd2c1c479e0d9a7fb0668c2ff4746df2b5258ebbd853e51b2d0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.703535 4718 scope.go:117] "RemoveContainer" containerID="7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.784259 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.791406 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.799501 4718 scope.go:117] "RemoveContainer" containerID="2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.805919 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.806821 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-central-agent" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.806922 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-central-agent" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.806957 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" 
containerName="ceilometer-notification-agent" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.806966 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-notification-agent" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.806992 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dae6a105-9e31-412f-8809-ce10dfacfe35" containerName="nova-cell0-conductor-db-sync" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807001 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dae6a105-9e31-412f-8809-ce10dfacfe35" containerName="nova-cell0-conductor-db-sync" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.807026 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="sg-core" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807034 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="sg-core" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.807048 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="proxy-httpd" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807056 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="proxy-httpd" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807334 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="sg-core" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807361 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-notification-agent" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807373 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="proxy-httpd" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807412 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dae6a105-9e31-412f-8809-ce10dfacfe35" containerName="nova-cell0-conductor-db-sync" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.807423 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" containerName="ceilometer-central-agent" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.810272 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.816468 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.818304 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.864485 4718 scope.go:117] "RemoveContainer" containerID="284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.865800 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.884487 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.886698 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.891082 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ncg47" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.891377 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.897716 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.918750 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.918826 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-log-httpd\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.918866 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.920746 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-run-httpd\") pod \"ceilometer-0\" (UID: 
\"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.920830 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-config-data\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.920924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-scripts\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.920977 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mxc\" (UniqueName: \"kubernetes.io/projected/8eeb4354-d452-4452-8b5e-b792475c3f53-kube-api-access-85mxc\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.971424 4718 scope.go:117] "RemoveContainer" containerID="5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.972006 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107\": container with ID starting with 5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107 not found: ID does not exist" containerID="5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.972054 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107"} err="failed to get container status \"5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107\": rpc error: code = NotFound desc = could not find container \"5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107\": container with ID starting with 5c356c9adf955d6ecb35826a9f9f135864e6c77d6ec2705a33e01cab52803107 not found: ID does not exist" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.972081 4718 scope.go:117] "RemoveContainer" containerID="7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.972735 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe\": container with ID starting with 7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe not found: ID does not exist" containerID="7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.972764 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe"} err="failed to get container status \"7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe\": rpc error: code = NotFound desc = could not find container \"7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe\": container with ID starting with 7970db441b1805f9fe753f0823a4c225f5db5e2e8fcea707dbc0f4462329b4fe not found: ID does not exist" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.972778 4718 scope.go:117] "RemoveContainer" containerID="2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.973229 4718 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8\": container with ID starting with 2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8 not found: ID does not exist" containerID="2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.973372 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8"} err="failed to get container status \"2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8\": rpc error: code = NotFound desc = could not find container \"2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8\": container with ID starting with 2b8ab98e6151f99012c470cc90cddd64ed2a72f4ca3aad4cec9ae29de35fada8 not found: ID does not exist" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.973446 4718 scope.go:117] "RemoveContainer" containerID="284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a" Dec 10 15:01:32 crc kubenswrapper[4718]: E1210 15:01:32.975278 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a\": container with ID starting with 284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a not found: ID does not exist" containerID="284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a" Dec 10 15:01:32 crc kubenswrapper[4718]: I1210 15:01:32.975332 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a"} err="failed to get container status \"284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a\": rpc error: code = NotFound desc = could not find container 
\"284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a\": container with ID starting with 284c7e3d336e4ef89180f451225c663e7980ea190e9785a8e077def08c73b61a not found: ID does not exist" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024015 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l4pg\" (UniqueName: \"kubernetes.io/projected/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-kube-api-access-5l4pg\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024345 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-run-httpd\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024563 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-config-data\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024650 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-scripts\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " 
pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024713 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mxc\" (UniqueName: \"kubernetes.io/projected/8eeb4354-d452-4452-8b5e-b792475c3f53-kube-api-access-85mxc\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.024948 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.025023 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-log-httpd\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.025084 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.025114 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-run-httpd\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.025136 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.025625 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-log-httpd\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.029477 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-scripts\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.031512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-config-data\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.031620 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.033457 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.049240 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mxc\" (UniqueName: \"kubernetes.io/projected/8eeb4354-d452-4452-8b5e-b792475c3f53-kube-api-access-85mxc\") pod \"ceilometer-0\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.128000 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.128193 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l4pg\" (UniqueName: \"kubernetes.io/projected/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-kube-api-access-5l4pg\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.128252 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.133496 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.134678 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.153879 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l4pg\" (UniqueName: \"kubernetes.io/projected/61020ad7-3d1d-4dab-9d46-2bb54b2e92d0-kube-api-access-5l4pg\") pod \"nova-cell0-conductor-0\" (UID: \"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0\") " pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.172229 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.278764 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.682907 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:01:33 crc kubenswrapper[4718]: I1210 15:01:33.810216 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 10 15:01:33 crc kubenswrapper[4718]: W1210 15:01:33.812103 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61020ad7_3d1d_4dab_9d46_2bb54b2e92d0.slice/crio-962ddeba6d1a2db1dfe532d713767d0482bdb349b3ba51d553e0a114448db7f3 WatchSource:0}: Error finding container 962ddeba6d1a2db1dfe532d713767d0482bdb349b3ba51d553e0a114448db7f3: Status 404 returned error can't find the container with id 962ddeba6d1a2db1dfe532d713767d0482bdb349b3ba51d553e0a114448db7f3 Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.035269 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d85809d-1325-40dc-8763-a62447fdaf5a" 
path="/var/lib/kubelet/pods/6d85809d-1325-40dc-8763-a62447fdaf5a/volumes" Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.703287 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerStarted","Data":"4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28"} Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.703365 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerStarted","Data":"6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3"} Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.703380 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerStarted","Data":"17da44dcea5d5e3863fe33af609d773d20b42b2b7f7e550e04c8792aa147ac45"} Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.707221 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0","Type":"ContainerStarted","Data":"28c3eea0f770a017024eb108df2c439d80c5c1147ff8245251bc55299347a43c"} Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.707290 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"61020ad7-3d1d-4dab-9d46-2bb54b2e92d0","Type":"ContainerStarted","Data":"962ddeba6d1a2db1dfe532d713767d0482bdb349b3ba51d553e0a114448db7f3"} Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.707424 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:34 crc kubenswrapper[4718]: I1210 15:01:34.741851 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.741820315 
podStartE2EDuration="2.741820315s" podCreationTimestamp="2025-12-10 15:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:34.732795499 +0000 UTC m=+1799.682018936" watchObservedRunningTime="2025-12-10 15:01:34.741820315 +0000 UTC m=+1799.691043752" Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.727084 4718 generic.go:334] "Generic (PLEG): container finished" podID="95261732-95ae-4618-a8a3-c883c287553e" containerID="4cd0ac0e3bfc181c12bfad133bbf2649aff491034cb6c0c37e1c8b126081dcc6" exitCode=0 Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.727170 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerDied","Data":"4cd0ac0e3bfc181c12bfad133bbf2649aff491034cb6c0c37e1c8b126081dcc6"} Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.730307 4718 scope.go:117] "RemoveContainer" containerID="5ce9fd4b574e115b7c7d868acd2e27bb898593ab80a95c86d1d7d52f0aff782e" Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.736829 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerStarted","Data":"ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811"} Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.818543 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.986508 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-combined-ca-bundle\") pod \"95261732-95ae-4618-a8a3-c883c287553e\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.986727 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-custom-prometheus-ca\") pod \"95261732-95ae-4618-a8a3-c883c287553e\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.986784 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95261732-95ae-4618-a8a3-c883c287553e-logs\") pod \"95261732-95ae-4618-a8a3-c883c287553e\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.986811 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-config-data\") pod \"95261732-95ae-4618-a8a3-c883c287553e\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.986864 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2t2m\" (UniqueName: \"kubernetes.io/projected/95261732-95ae-4618-a8a3-c883c287553e-kube-api-access-b2t2m\") pod \"95261732-95ae-4618-a8a3-c883c287553e\" (UID: \"95261732-95ae-4618-a8a3-c883c287553e\") " Dec 10 15:01:35 crc kubenswrapper[4718]: I1210 15:01:35.987911 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/95261732-95ae-4618-a8a3-c883c287553e-logs" (OuterVolumeSpecName: "logs") pod "95261732-95ae-4618-a8a3-c883c287553e" (UID: "95261732-95ae-4618-a8a3-c883c287553e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.037541 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95261732-95ae-4618-a8a3-c883c287553e-kube-api-access-b2t2m" (OuterVolumeSpecName: "kube-api-access-b2t2m") pod "95261732-95ae-4618-a8a3-c883c287553e" (UID: "95261732-95ae-4618-a8a3-c883c287553e"). InnerVolumeSpecName "kube-api-access-b2t2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.049955 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "95261732-95ae-4618-a8a3-c883c287553e" (UID: "95261732-95ae-4618-a8a3-c883c287553e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.050146 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95261732-95ae-4618-a8a3-c883c287553e" (UID: "95261732-95ae-4618-a8a3-c883c287553e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.093350 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.093428 4718 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.093440 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95261732-95ae-4618-a8a3-c883c287553e-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.093454 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2t2m\" (UniqueName: \"kubernetes.io/projected/95261732-95ae-4618-a8a3-c883c287553e-kube-api-access-b2t2m\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.103119 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-config-data" (OuterVolumeSpecName: "config-data") pod "95261732-95ae-4618-a8a3-c883c287553e" (UID: "95261732-95ae-4618-a8a3-c883c287553e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.196439 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95261732-95ae-4618-a8a3-c883c287553e-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.752209 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"95261732-95ae-4618-a8a3-c883c287553e","Type":"ContainerDied","Data":"724665ed02d0c341ded0e4f7dd51c6951e127f62923f1875a65ea5b1a7cd066d"} Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.752276 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.752293 4718 scope.go:117] "RemoveContainer" containerID="4cd0ac0e3bfc181c12bfad133bbf2649aff491034cb6c0c37e1c8b126081dcc6" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.811157 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.824264 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.862892 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 15:01:36 crc kubenswrapper[4718]: E1210 15:01:36.863797 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.863832 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: E1210 15:01:36.863855 4718 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.863876 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: E1210 15:01:36.863939 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.863949 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.864364 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.864420 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.864438 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.864454 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.866151 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.882731 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.890100 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.920736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0105b21d-8a6a-4368-aec4-80c009daecd1-logs\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.920876 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p645w\" (UniqueName: \"kubernetes.io/projected/0105b21d-8a6a-4368-aec4-80c009daecd1-kube-api-access-p645w\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.920942 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.921078 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " 
pod="openstack/watcher-decision-engine-0" Dec 10 15:01:36 crc kubenswrapper[4718]: I1210 15:01:36.921150 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.023411 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p645w\" (UniqueName: \"kubernetes.io/projected/0105b21d-8a6a-4368-aec4-80c009daecd1-kube-api-access-p645w\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.023498 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.023637 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.023706 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " 
pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.025451 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0105b21d-8a6a-4368-aec4-80c009daecd1-logs\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.026277 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0105b21d-8a6a-4368-aec4-80c009daecd1-logs\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.030555 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.041951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.044587 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0105b21d-8a6a-4368-aec4-80c009daecd1-config-data\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.050289 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p645w\" (UniqueName: \"kubernetes.io/projected/0105b21d-8a6a-4368-aec4-80c009daecd1-kube-api-access-p645w\") pod \"watcher-decision-engine-0\" (UID: \"0105b21d-8a6a-4368-aec4-80c009daecd1\") " pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.214200 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:37 crc kubenswrapper[4718]: I1210 15:01:37.909660 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 10 15:01:37 crc kubenswrapper[4718]: W1210 15:01:37.920465 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0105b21d_8a6a_4368_aec4_80c009daecd1.slice/crio-b1cca1c50eda4206243b761e93097eacacac6ef1b66645651b8f2f47b3955b0e WatchSource:0}: Error finding container b1cca1c50eda4206243b761e93097eacacac6ef1b66645651b8f2f47b3955b0e: Status 404 returned error can't find the container with id b1cca1c50eda4206243b761e93097eacacac6ef1b66645651b8f2f47b3955b0e Dec 10 15:01:38 crc kubenswrapper[4718]: I1210 15:01:38.039418 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95261732-95ae-4618-a8a3-c883c287553e" path="/var/lib/kubelet/pods/95261732-95ae-4618-a8a3-c883c287553e/volumes" Dec 10 15:01:38 crc kubenswrapper[4718]: I1210 15:01:38.786008 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerStarted","Data":"59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949"} Dec 10 15:01:38 crc kubenswrapper[4718]: I1210 15:01:38.789564 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:01:38 crc kubenswrapper[4718]: I1210 15:01:38.803993 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0105b21d-8a6a-4368-aec4-80c009daecd1","Type":"ContainerStarted","Data":"37b36f85cc707887ec742472818a3504306c1a8b71191022cdb1f713a6bb1a65"} Dec 10 15:01:38 crc kubenswrapper[4718]: I1210 15:01:38.804772 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"0105b21d-8a6a-4368-aec4-80c009daecd1","Type":"ContainerStarted","Data":"b1cca1c50eda4206243b761e93097eacacac6ef1b66645651b8f2f47b3955b0e"} Dec 10 15:01:38 crc kubenswrapper[4718]: I1210 15:01:38.837264 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7308170990000002 podStartE2EDuration="6.83721732s" podCreationTimestamp="2025-12-10 15:01:32 +0000 UTC" firstStartedPulling="2025-12-10 15:01:33.684453255 +0000 UTC m=+1798.633676672" lastFinishedPulling="2025-12-10 15:01:37.790853476 +0000 UTC m=+1802.740076893" observedRunningTime="2025-12-10 15:01:38.825126747 +0000 UTC m=+1803.774350194" watchObservedRunningTime="2025-12-10 15:01:38.83721732 +0000 UTC m=+1803.786440737" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.314988 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.342888 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=7.342857016 podStartE2EDuration="7.342857016s" podCreationTimestamp="2025-12-10 15:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:38.862934556 +0000 UTC m=+1803.812157973" watchObservedRunningTime="2025-12-10 15:01:43.342857016 +0000 UTC m=+1808.292080433" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.886433 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-4pldw"] Dec 10 15:01:43 crc kubenswrapper[4718]: E1210 15:01:43.887313 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.887329 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:43 crc kubenswrapper[4718]: E1210 15:01:43.887349 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.887356 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.887624 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="95261732-95ae-4618-a8a3-c883c287553e" containerName="watcher-decision-engine" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.888561 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.894136 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.894499 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 10 15:01:43 crc kubenswrapper[4718]: I1210 15:01:43.949519 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pldw"] Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.032856 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:01:44 crc kubenswrapper[4718]: E1210 15:01:44.033272 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.049376 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgvg5\" (UniqueName: \"kubernetes.io/projected/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-kube-api-access-mgvg5\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.049473 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pldw\" 
(UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.049631 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-scripts\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.049660 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-config-data\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.157198 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgvg5\" (UniqueName: \"kubernetes.io/projected/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-kube-api-access-mgvg5\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.157277 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.157352 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-scripts\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: 
\"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.157411 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-config-data\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.187193 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-config-data\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.188166 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-scripts\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.189758 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.251492 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgvg5\" (UniqueName: \"kubernetes.io/projected/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-kube-api-access-mgvg5\") pod \"nova-cell0-cell-mapping-4pldw\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " pod="openstack/nova-cell0-cell-mapping-4pldw" 
Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.277142 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.488527 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.490981 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.506033 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.515941 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.574062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3a55a46-7909-4170-a010-afc35dade12f-logs\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.574133 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.574292 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qw88\" (UniqueName: \"kubernetes.io/projected/d3a55a46-7909-4170-a010-afc35dade12f-kube-api-access-6qw88\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc 
kubenswrapper[4718]: I1210 15:01:44.574347 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-config-data\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.656364 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.659459 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.667414 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.682599 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.683125 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qw88\" (UniqueName: \"kubernetes.io/projected/d3a55a46-7909-4170-a010-afc35dade12f-kube-api-access-6qw88\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.683240 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-config-data\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 
15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.683368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.683474 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3a55a46-7909-4170-a010-afc35dade12f-logs\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.683547 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.683678 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftrrg\" (UniqueName: \"kubernetes.io/projected/cfa18867-53a1-4a8f-a010-318cefec946c-kube-api-access-ftrrg\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.685092 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3a55a46-7909-4170-a010-afc35dade12f-logs\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.697602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-config-data\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.699310 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.763823 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.765723 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qw88\" (UniqueName: \"kubernetes.io/projected/d3a55a46-7909-4170-a010-afc35dade12f-kube-api-access-6qw88\") pod \"nova-metadata-0\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " pod="openstack/nova-metadata-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.830815 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.874152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftrrg\" (UniqueName: \"kubernetes.io/projected/cfa18867-53a1-4a8f-a010-318cefec946c-kube-api-access-ftrrg\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.879803 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.839722 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.896407 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.912778 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftrrg\" (UniqueName: \"kubernetes.io/projected/cfa18867-53a1-4a8f-a010-318cefec946c-kube-api-access-ftrrg\") pod \"nova-cell1-novncproxy-0\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:44 crc kubenswrapper[4718]: I1210 15:01:44.961497 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.009893 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.039075 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.041997 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.049045 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.090771 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.099022 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdh2\" (UniqueName: \"kubernetes.io/projected/c7a2c239-7449-4684-a8c8-2d295d790367-kube-api-access-8fdh2\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.099420 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-config-data\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.099624 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.099766 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a2c239-7449-4684-a8c8-2d295d790367-logs\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.161505 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 
15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.164297 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.171347 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.179461 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.222010 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.222093 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-config-data\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.222152 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdh2\" (UniqueName: \"kubernetes.io/projected/c7a2c239-7449-4684-a8c8-2d295d790367-kube-api-access-8fdh2\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.222203 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-config-data\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc 
kubenswrapper[4718]: I1210 15:01:45.222847 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.222879 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a2c239-7449-4684-a8c8-2d295d790367-logs\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.222901 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf67j\" (UniqueName: \"kubernetes.io/projected/d30ea5cb-94c9-4beb-83a0-7f076696c395-kube-api-access-bf67j\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.228154 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a2c239-7449-4684-a8c8-2d295d790367-logs\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.238260 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.239142 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-config-data\") pod 
\"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.247694 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-clkpk"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.260545 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-clkpk"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.279602 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdh2\" (UniqueName: \"kubernetes.io/projected/c7a2c239-7449-4684-a8c8-2d295d790367-kube-api-access-8fdh2\") pod \"nova-api-0\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.279873 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.331437 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332007 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-svc\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332060 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-config-data\") pod \"nova-scheduler-0\" 
(UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332090 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-swift-storage-0\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332205 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-nb\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332274 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps7wz\" (UniqueName: \"kubernetes.io/projected/d52f76ed-011b-46f0-b5a7-653e1c481595-kube-api-access-ps7wz\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332366 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf67j\" (UniqueName: \"kubernetes.io/projected/d30ea5cb-94c9-4beb-83a0-7f076696c395-kube-api-access-bf67j\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332432 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-config\") pod 
\"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.332461 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-sb\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.343658 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.357353 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-config-data\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.373361 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.390370 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf67j\" (UniqueName: \"kubernetes.io/projected/d30ea5cb-94c9-4beb-83a0-7f076696c395-kube-api-access-bf67j\") pod \"nova-scheduler-0\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.436347 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-config\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.438302 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-config\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.436789 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-sb\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.439974 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-sb\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.440188 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-svc\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.442001 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-swift-storage-0\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.442214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-nb\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.442340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps7wz\" (UniqueName: \"kubernetes.io/projected/d52f76ed-011b-46f0-b5a7-653e1c481595-kube-api-access-ps7wz\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.443919 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-swift-storage-0\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.444920 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-svc\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.446158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-nb\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.451247 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pldw"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.489076 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps7wz\" (UniqueName: \"kubernetes.io/projected/d52f76ed-011b-46f0-b5a7-653e1c481595-kube-api-access-ps7wz\") pod \"dnsmasq-dns-844fc57f6f-clkpk\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.539884 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.557609 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.934226 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5pzcj"] Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.937319 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.980528 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.986732 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 15:01:45 crc kubenswrapper[4718]: I1210 15:01:45.990534 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.011101 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5pzcj"] Dec 10 15:01:46 crc kubenswrapper[4718]: W1210 15:01:46.040021 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a55a46_7909_4170_a010_afc35dade12f.slice/crio-72dbc5a3836060ada1e109ce8aa0d0d892afcfb949efaf1e808f2dbd61e0612f WatchSource:0}: Error finding container 72dbc5a3836060ada1e109ce8aa0d0d892afcfb949efaf1e808f2dbd61e0612f: Status 404 returned error can't find the container with id 72dbc5a3836060ada1e109ce8aa0d0d892afcfb949efaf1e808f2dbd61e0612f Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.073094 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-config-data\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.073185 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.073224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.073246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-scripts\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.089809 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pldw" event={"ID":"4aa714e1-7bc1-4d1d-b829-f053c2a4404c","Type":"ContainerStarted","Data":"39aa8e53511f1f882bb455ca76886606af0b926eed37395af86628aaa0c9ae9c"} Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.175708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-config-data\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.175795 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: 
\"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.175834 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.175860 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-scripts\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.191766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-scripts\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.200304 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.221884 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-config-data\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") 
" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.284270 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.297163 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.298082 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") pod \"nova-cell1-conductor-db-sync-5pzcj\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.342305 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.430068 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.506778 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.703195 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:01:46 crc kubenswrapper[4718]: I1210 15:01:46.805092 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-clkpk"] Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.103652 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d30ea5cb-94c9-4beb-83a0-7f076696c395","Type":"ContainerStarted","Data":"2a801fd8ada83477cdf8f048ee2bf5f8e4543f6bc7515bc705bcc8a7002efbaf"} Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.125197 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" event={"ID":"d52f76ed-011b-46f0-b5a7-653e1c481595","Type":"ContainerStarted","Data":"b561495b4bd9042c18bd67116d44b9321065bbcb23079a1080716eaeafac2b9d"} Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.130811 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cfa18867-53a1-4a8f-a010-318cefec946c","Type":"ContainerStarted","Data":"e9e8dc52c1b5e1d0049bdd9ec622169708d464ba57a51654564fc9793c9f3cdd"} Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.135514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a2c239-7449-4684-a8c8-2d295d790367","Type":"ContainerStarted","Data":"24e12d387e8f33b968386586eaaac6b6db0bdbea04ff120660fe433fb21f654a"} Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.138156 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pldw" event={"ID":"4aa714e1-7bc1-4d1d-b829-f053c2a4404c","Type":"ContainerStarted","Data":"4fcbacd776f0cf7418050c3e0442a241e1ac5e95255f3e49040d04b9790e13c8"} Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.149591 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3a55a46-7909-4170-a010-afc35dade12f","Type":"ContainerStarted","Data":"72dbc5a3836060ada1e109ce8aa0d0d892afcfb949efaf1e808f2dbd61e0612f"} Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.175234 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-4pldw" podStartSLOduration=4.175207966 podStartE2EDuration="4.175207966s" podCreationTimestamp="2025-12-10 15:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:47.168047366 +0000 UTC m=+1812.117270783" watchObservedRunningTime="2025-12-10 15:01:47.175207966 +0000 UTC m=+1812.124431383" Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.215669 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.289035 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:47 crc kubenswrapper[4718]: I1210 15:01:47.527018 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5pzcj"] Dec 10 15:01:47 crc kubenswrapper[4718]: W1210 15:01:47.585407 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32c38ab3_5fa2_49cd_b76c_625823fb56a6.slice/crio-3804897b5148179e1737a76a96ff222ba4166cf69f09fe60e868bee52ed26bb3 WatchSource:0}: Error finding 
container 3804897b5148179e1737a76a96ff222ba4166cf69f09fe60e868bee52ed26bb3: Status 404 returned error can't find the container with id 3804897b5148179e1737a76a96ff222ba4166cf69f09fe60e868bee52ed26bb3 Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.253553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" event={"ID":"32c38ab3-5fa2-49cd-b76c-625823fb56a6","Type":"ContainerStarted","Data":"43b23f2d7a8abf2cb507c784db0e8e74a942b2fb0ae70adf892ff570fb9e939a"} Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.254054 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" event={"ID":"32c38ab3-5fa2-49cd-b76c-625823fb56a6","Type":"ContainerStarted","Data":"3804897b5148179e1737a76a96ff222ba4166cf69f09fe60e868bee52ed26bb3"} Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.272131 4718 generic.go:334] "Generic (PLEG): container finished" podID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerID="68dfd1fce9ee9d2991347b6313e2473d30c5c28b5cf40ddbb0fe3721189c8e52" exitCode=0 Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.272971 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" event={"ID":"d52f76ed-011b-46f0-b5a7-653e1c481595","Type":"ContainerDied","Data":"68dfd1fce9ee9d2991347b6313e2473d30c5c28b5cf40ddbb0fe3721189c8e52"} Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.275750 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.320494 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" podStartSLOduration=3.320461744 podStartE2EDuration="3.320461744s" podCreationTimestamp="2025-12-10 15:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-10 15:01:48.282565632 +0000 UTC m=+1813.231789049" watchObservedRunningTime="2025-12-10 15:01:48.320461744 +0000 UTC m=+1813.269685161" Dec 10 15:01:48 crc kubenswrapper[4718]: I1210 15:01:48.392852 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 10 15:01:49 crc kubenswrapper[4718]: I1210 15:01:49.054021 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:49 crc kubenswrapper[4718]: I1210 15:01:49.115007 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.428407 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" event={"ID":"d52f76ed-011b-46f0-b5a7-653e1c481595","Type":"ContainerStarted","Data":"7102780e09e56dfb8f29e1b7a522c30470f322e32f0b15ccf6c1b46ae7177ac7"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.430795 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.440911 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cfa18867-53a1-4a8f-a010-318cefec946c","Type":"ContainerStarted","Data":"3a9c0ba4b3ae903c327c813aa2d0f69a2a9c9688da494e9f82cddb28b5ce6177"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.441000 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cfa18867-53a1-4a8f-a010-318cefec946c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3a9c0ba4b3ae903c327c813aa2d0f69a2a9c9688da494e9f82cddb28b5ce6177" gracePeriod=30 Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.464334 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c7a2c239-7449-4684-a8c8-2d295d790367","Type":"ContainerStarted","Data":"251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.464561 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a2c239-7449-4684-a8c8-2d295d790367","Type":"ContainerStarted","Data":"81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.472294 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" podStartSLOduration=8.472260915 podStartE2EDuration="8.472260915s" podCreationTimestamp="2025-12-10 15:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:52.462727905 +0000 UTC m=+1817.411951332" watchObservedRunningTime="2025-12-10 15:01:52.472260915 +0000 UTC m=+1817.421484332" Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.478664 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3a55a46-7909-4170-a010-afc35dade12f","Type":"ContainerStarted","Data":"4b92e8ffea29fa770dd17b90fe1563746135a5ac1e9053ba80dadf7ab2835398"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.478760 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3a55a46-7909-4170-a010-afc35dade12f","Type":"ContainerStarted","Data":"a34edd718765bcbc9b7fe19d9bc8beb0550a549a0f558648513843d9bd853674"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.479154 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-log" containerID="cri-o://a34edd718765bcbc9b7fe19d9bc8beb0550a549a0f558648513843d9bd853674" gracePeriod=30 Dec 10 15:01:52 crc 
kubenswrapper[4718]: I1210 15:01:52.479195 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-metadata" containerID="cri-o://4b92e8ffea29fa770dd17b90fe1563746135a5ac1e9053ba80dadf7ab2835398" gracePeriod=30 Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.490321 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d30ea5cb-94c9-4beb-83a0-7f076696c395","Type":"ContainerStarted","Data":"d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70"} Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.508359 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.432764594 podStartE2EDuration="8.50832218s" podCreationTimestamp="2025-12-10 15:01:44 +0000 UTC" firstStartedPulling="2025-12-10 15:01:46.269537086 +0000 UTC m=+1811.218760503" lastFinishedPulling="2025-12-10 15:01:51.345094672 +0000 UTC m=+1816.294318089" observedRunningTime="2025-12-10 15:01:52.496470553 +0000 UTC m=+1817.445693970" watchObservedRunningTime="2025-12-10 15:01:52.50832218 +0000 UTC m=+1817.457545597" Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.543820 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.251204995 podStartE2EDuration="8.543776391s" podCreationTimestamp="2025-12-10 15:01:44 +0000 UTC" firstStartedPulling="2025-12-10 15:01:46.055668795 +0000 UTC m=+1811.004892212" lastFinishedPulling="2025-12-10 15:01:51.348240191 +0000 UTC m=+1816.297463608" observedRunningTime="2025-12-10 15:01:52.529122773 +0000 UTC m=+1817.478346200" watchObservedRunningTime="2025-12-10 15:01:52.543776391 +0000 UTC m=+1817.492999808" Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.565206 4718 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-api-0" podStartSLOduration=3.9287583980000003 podStartE2EDuration="8.565175958s" podCreationTimestamp="2025-12-10 15:01:44 +0000 UTC" firstStartedPulling="2025-12-10 15:01:46.675605601 +0000 UTC m=+1811.624829018" lastFinishedPulling="2025-12-10 15:01:51.312023151 +0000 UTC m=+1816.261246578" observedRunningTime="2025-12-10 15:01:52.557723611 +0000 UTC m=+1817.506947028" watchObservedRunningTime="2025-12-10 15:01:52.565175958 +0000 UTC m=+1817.514399375" Dec 10 15:01:52 crc kubenswrapper[4718]: I1210 15:01:52.584514 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.004615142 podStartE2EDuration="8.584468872s" podCreationTimestamp="2025-12-10 15:01:44 +0000 UTC" firstStartedPulling="2025-12-10 15:01:46.702413334 +0000 UTC m=+1811.651636751" lastFinishedPulling="2025-12-10 15:01:51.282267064 +0000 UTC m=+1816.231490481" observedRunningTime="2025-12-10 15:01:52.578772849 +0000 UTC m=+1817.527996266" watchObservedRunningTime="2025-12-10 15:01:52.584468872 +0000 UTC m=+1817.533692289" Dec 10 15:01:53 crc kubenswrapper[4718]: I1210 15:01:53.507268 4718 generic.go:334] "Generic (PLEG): container finished" podID="d3a55a46-7909-4170-a010-afc35dade12f" containerID="4b92e8ffea29fa770dd17b90fe1563746135a5ac1e9053ba80dadf7ab2835398" exitCode=0 Dec 10 15:01:53 crc kubenswrapper[4718]: I1210 15:01:53.507741 4718 generic.go:334] "Generic (PLEG): container finished" podID="d3a55a46-7909-4170-a010-afc35dade12f" containerID="a34edd718765bcbc9b7fe19d9bc8beb0550a549a0f558648513843d9bd853674" exitCode=143 Dec 10 15:01:53 crc kubenswrapper[4718]: I1210 15:01:53.507455 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3a55a46-7909-4170-a010-afc35dade12f","Type":"ContainerDied","Data":"4b92e8ffea29fa770dd17b90fe1563746135a5ac1e9053ba80dadf7ab2835398"} Dec 10 15:01:53 crc kubenswrapper[4718]: I1210 15:01:53.508803 4718 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3a55a46-7909-4170-a010-afc35dade12f","Type":"ContainerDied","Data":"a34edd718765bcbc9b7fe19d9bc8beb0550a549a0f558648513843d9bd853674"} Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.185406 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.343203 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-config-data\") pod \"d3a55a46-7909-4170-a010-afc35dade12f\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.343570 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qw88\" (UniqueName: \"kubernetes.io/projected/d3a55a46-7909-4170-a010-afc35dade12f-kube-api-access-6qw88\") pod \"d3a55a46-7909-4170-a010-afc35dade12f\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.343619 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-combined-ca-bundle\") pod \"d3a55a46-7909-4170-a010-afc35dade12f\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.343887 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3a55a46-7909-4170-a010-afc35dade12f-logs\") pod \"d3a55a46-7909-4170-a010-afc35dade12f\" (UID: \"d3a55a46-7909-4170-a010-afc35dade12f\") " Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.345173 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d3a55a46-7909-4170-a010-afc35dade12f-logs" (OuterVolumeSpecName: "logs") pod "d3a55a46-7909-4170-a010-afc35dade12f" (UID: "d3a55a46-7909-4170-a010-afc35dade12f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.356760 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a55a46-7909-4170-a010-afc35dade12f-kube-api-access-6qw88" (OuterVolumeSpecName: "kube-api-access-6qw88") pod "d3a55a46-7909-4170-a010-afc35dade12f" (UID: "d3a55a46-7909-4170-a010-afc35dade12f"). InnerVolumeSpecName "kube-api-access-6qw88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.396737 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3a55a46-7909-4170-a010-afc35dade12f" (UID: "d3a55a46-7909-4170-a010-afc35dade12f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.426157 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-config-data" (OuterVolumeSpecName: "config-data") pod "d3a55a46-7909-4170-a010-afc35dade12f" (UID: "d3a55a46-7909-4170-a010-afc35dade12f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.449354 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3a55a46-7909-4170-a010-afc35dade12f-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.449420 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.449436 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qw88\" (UniqueName: \"kubernetes.io/projected/d3a55a46-7909-4170-a010-afc35dade12f-kube-api-access-6qw88\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.449451 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3a55a46-7909-4170-a010-afc35dade12f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.527114 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3a55a46-7909-4170-a010-afc35dade12f","Type":"ContainerDied","Data":"72dbc5a3836060ada1e109ce8aa0d0d892afcfb949efaf1e808f2dbd61e0612f"} Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.527192 4718 scope.go:117] "RemoveContainer" containerID="4b92e8ffea29fa770dd17b90fe1563746135a5ac1e9053ba80dadf7ab2835398" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.527469 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.591507 4718 scope.go:117] "RemoveContainer" containerID="a34edd718765bcbc9b7fe19d9bc8beb0550a549a0f558648513843d9bd853674" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.601197 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.619306 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.649993 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:54 crc kubenswrapper[4718]: E1210 15:01:54.650937 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-metadata" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.650971 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-metadata" Dec 10 15:01:54 crc kubenswrapper[4718]: E1210 15:01:54.650993 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-log" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.651004 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-log" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.651340 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-metadata" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.651409 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a55a46-7909-4170-a010-afc35dade12f" containerName="nova-metadata-log" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.653478 4718 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.658735 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.662029 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.681377 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.774246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e21c07-1414-4283-b8ad-e315fde18459-logs\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.774828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-config-data\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.774905 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cwk\" (UniqueName: \"kubernetes.io/projected/f0e21c07-1414-4283-b8ad-e315fde18459-kube-api-access-q2cwk\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.774946 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.774975 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.877283 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-config-data\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.877450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cwk\" (UniqueName: \"kubernetes.io/projected/f0e21c07-1414-4283-b8ad-e315fde18459-kube-api-access-q2cwk\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.877486 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.877518 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc 
kubenswrapper[4718]: I1210 15:01:54.877619 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e21c07-1414-4283-b8ad-e315fde18459-logs\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.878490 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e21c07-1414-4283-b8ad-e315fde18459-logs\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.887943 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.890983 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.894326 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-config-data\") pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.906268 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cwk\" (UniqueName: \"kubernetes.io/projected/f0e21c07-1414-4283-b8ad-e315fde18459-kube-api-access-q2cwk\") 
pod \"nova-metadata-0\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " pod="openstack/nova-metadata-0" Dec 10 15:01:54 crc kubenswrapper[4718]: I1210 15:01:54.984133 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.011570 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.375367 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.375770 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.542423 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.544131 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.594419 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:01:55 crc kubenswrapper[4718]: I1210 15:01:55.622053 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:01:56 crc kubenswrapper[4718]: I1210 15:01:56.135509 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a55a46-7909-4170-a010-afc35dade12f" path="/var/lib/kubelet/pods/d3a55a46-7909-4170-a010-afc35dade12f/volumes" Dec 10 15:01:56 crc kubenswrapper[4718]: I1210 15:01:56.459874 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:56 crc kubenswrapper[4718]: I1210 15:01:56.459876 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:01:56 crc kubenswrapper[4718]: I1210 15:01:56.575365 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e21c07-1414-4283-b8ad-e315fde18459","Type":"ContainerStarted","Data":"18f12522ac7bca50c83ea3fbe66b16cad0f7558b428d60802cd584634921cc64"} Dec 10 15:01:56 crc kubenswrapper[4718]: I1210 15:01:56.576824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e21c07-1414-4283-b8ad-e315fde18459","Type":"ContainerStarted","Data":"51f23c23018d2f110c6a360ee8fec50d9b2972bbe684b69a67dcba4ec452ea21"} Dec 10 15:01:56 crc kubenswrapper[4718]: I1210 15:01:56.611227 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:01:57 crc kubenswrapper[4718]: I1210 15:01:57.593879 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e21c07-1414-4283-b8ad-e315fde18459","Type":"ContainerStarted","Data":"916835527bbe8ba773919dd753a07cc802c2716f4cf136b9cef1bc630f9a4180"} Dec 10 15:01:57 crc kubenswrapper[4718]: I1210 15:01:57.642182 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.642147059 podStartE2EDuration="3.642147059s" podCreationTimestamp="2025-12-10 15:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:01:57.617124931 +0000 UTC m=+1822.566348348" watchObservedRunningTime="2025-12-10 
15:01:57.642147059 +0000 UTC m=+1822.591370496" Dec 10 15:01:59 crc kubenswrapper[4718]: I1210 15:01:59.021542 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:01:59 crc kubenswrapper[4718]: E1210 15:01:59.022666 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:01:59 crc kubenswrapper[4718]: I1210 15:01:59.984369 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:01:59 crc kubenswrapper[4718]: I1210 15:01:59.984460 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:02:00 crc kubenswrapper[4718]: I1210 15:02:00.560668 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:02:00 crc kubenswrapper[4718]: I1210 15:02:00.638595 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-86zrn"] Dec 10 15:02:00 crc kubenswrapper[4718]: I1210 15:02:00.639281 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75958fc765-86zrn" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="dnsmasq-dns" containerID="cri-o://1239efa9e73f912dd8e937153183ce37860c0b79e87c56baa44eb8b5fb376ac7" gracePeriod=10 Dec 10 15:02:00 crc kubenswrapper[4718]: I1210 15:02:00.687004 4718 generic.go:334] "Generic (PLEG): container finished" podID="4aa714e1-7bc1-4d1d-b829-f053c2a4404c" containerID="4fcbacd776f0cf7418050c3e0442a241e1ac5e95255f3e49040d04b9790e13c8" 
exitCode=0 Dec 10 15:02:00 crc kubenswrapper[4718]: I1210 15:02:00.687094 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pldw" event={"ID":"4aa714e1-7bc1-4d1d-b829-f053c2a4404c","Type":"ContainerDied","Data":"4fcbacd776f0cf7418050c3e0442a241e1ac5e95255f3e49040d04b9790e13c8"} Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.088238 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75958fc765-86zrn" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.189:5353: connect: connection refused" Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.706099 4718 generic.go:334] "Generic (PLEG): container finished" podID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerID="1239efa9e73f912dd8e937153183ce37860c0b79e87c56baa44eb8b5fb376ac7" exitCode=0 Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.706266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-86zrn" event={"ID":"dffdff5c-1fe4-4dbb-93c7-14f497a4e939","Type":"ContainerDied","Data":"1239efa9e73f912dd8e937153183ce37860c0b79e87c56baa44eb8b5fb376ac7"} Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.706889 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-86zrn" event={"ID":"dffdff5c-1fe4-4dbb-93c7-14f497a4e939","Type":"ContainerDied","Data":"06c9969e1d51169ad21079e6aed664ea730a710536e2c86b706fc0d6ffc8ef5e"} Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.706914 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06c9969e1d51169ad21079e6aed664ea730a710536e2c86b706fc0d6ffc8ef5e" Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.753498 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.919058 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-nb\") pod \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.919168 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-swift-storage-0\") pod \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.919250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-svc\") pod \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.919477 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2fk4\" (UniqueName: \"kubernetes.io/projected/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-kube-api-access-b2fk4\") pod \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.919598 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-config\") pod \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.919635 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-sb\") pod \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\" (UID: \"dffdff5c-1fe4-4dbb-93c7-14f497a4e939\") " Dec 10 15:02:01 crc kubenswrapper[4718]: I1210 15:02:01.949080 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-kube-api-access-b2fk4" (OuterVolumeSpecName: "kube-api-access-b2fk4") pod "dffdff5c-1fe4-4dbb-93c7-14f497a4e939" (UID: "dffdff5c-1fe4-4dbb-93c7-14f497a4e939"). InnerVolumeSpecName "kube-api-access-b2fk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.000630 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-config" (OuterVolumeSpecName: "config") pod "dffdff5c-1fe4-4dbb-93c7-14f497a4e939" (UID: "dffdff5c-1fe4-4dbb-93c7-14f497a4e939"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.031241 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dffdff5c-1fe4-4dbb-93c7-14f497a4e939" (UID: "dffdff5c-1fe4-4dbb-93c7-14f497a4e939"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.031708 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dffdff5c-1fe4-4dbb-93c7-14f497a4e939" (UID: "dffdff5c-1fe4-4dbb-93c7-14f497a4e939"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.036631 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.036999 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.037016 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.037032 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2fk4\" (UniqueName: \"kubernetes.io/projected/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-kube-api-access-b2fk4\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.047064 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dffdff5c-1fe4-4dbb-93c7-14f497a4e939" (UID: "dffdff5c-1fe4-4dbb-93c7-14f497a4e939"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.052324 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dffdff5c-1fe4-4dbb-93c7-14f497a4e939" (UID: "dffdff5c-1fe4-4dbb-93c7-14f497a4e939"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.138992 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.139037 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dffdff5c-1fe4-4dbb-93c7-14f497a4e939-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.231601 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.342998 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-scripts\") pod \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.343251 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-combined-ca-bundle\") pod \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.343408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgvg5\" (UniqueName: \"kubernetes.io/projected/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-kube-api-access-mgvg5\") pod \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.343544 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-config-data\") pod \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\" (UID: \"4aa714e1-7bc1-4d1d-b829-f053c2a4404c\") " Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.361313 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-scripts" (OuterVolumeSpecName: "scripts") pod "4aa714e1-7bc1-4d1d-b829-f053c2a4404c" (UID: "4aa714e1-7bc1-4d1d-b829-f053c2a4404c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.363766 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-kube-api-access-mgvg5" (OuterVolumeSpecName: "kube-api-access-mgvg5") pod "4aa714e1-7bc1-4d1d-b829-f053c2a4404c" (UID: "4aa714e1-7bc1-4d1d-b829-f053c2a4404c"). InnerVolumeSpecName "kube-api-access-mgvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.380028 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-config-data" (OuterVolumeSpecName: "config-data") pod "4aa714e1-7bc1-4d1d-b829-f053c2a4404c" (UID: "4aa714e1-7bc1-4d1d-b829-f053c2a4404c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.380428 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4aa714e1-7bc1-4d1d-b829-f053c2a4404c" (UID: "4aa714e1-7bc1-4d1d-b829-f053c2a4404c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.447664 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.447755 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.447773 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.447791 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgvg5\" (UniqueName: \"kubernetes.io/projected/4aa714e1-7bc1-4d1d-b829-f053c2a4404c-kube-api-access-mgvg5\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.722331 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-4pldw" event={"ID":"4aa714e1-7bc1-4d1d-b829-f053c2a4404c","Type":"ContainerDied","Data":"39aa8e53511f1f882bb455ca76886606af0b926eed37395af86628aaa0c9ae9c"} Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.722414 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-4pldw" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.722443 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39aa8e53511f1f882bb455ca76886606af0b926eed37395af86628aaa0c9ae9c" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.722374 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-86zrn" Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.794976 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-86zrn"] Dec 10 15:02:02 crc kubenswrapper[4718]: I1210 15:02:02.807471 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-86zrn"] Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.002757 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.003241 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-log" containerID="cri-o://81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16" gracePeriod=30 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.003582 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-api" containerID="cri-o://251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6" gracePeriod=30 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.084485 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.084870 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-log" containerID="cri-o://18f12522ac7bca50c83ea3fbe66b16cad0f7558b428d60802cd584634921cc64" gracePeriod=30 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.085694 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-metadata" 
containerID="cri-o://916835527bbe8ba773919dd753a07cc802c2716f4cf136b9cef1bc630f9a4180" gracePeriod=30 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.108367 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.108688 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d30ea5cb-94c9-4beb-83a0-7f076696c395" containerName="nova-scheduler-scheduler" containerID="cri-o://d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70" gracePeriod=30 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.244164 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.753030 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7a2c239-7449-4684-a8c8-2d295d790367" containerID="81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16" exitCode=143 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.753105 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a2c239-7449-4684-a8c8-2d295d790367","Type":"ContainerDied","Data":"81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16"} Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.760171 4718 generic.go:334] "Generic (PLEG): container finished" podID="f0e21c07-1414-4283-b8ad-e315fde18459" containerID="916835527bbe8ba773919dd753a07cc802c2716f4cf136b9cef1bc630f9a4180" exitCode=0 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.760229 4718 generic.go:334] "Generic (PLEG): container finished" podID="f0e21c07-1414-4283-b8ad-e315fde18459" containerID="18f12522ac7bca50c83ea3fbe66b16cad0f7558b428d60802cd584634921cc64" exitCode=143 Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.760237 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f0e21c07-1414-4283-b8ad-e315fde18459","Type":"ContainerDied","Data":"916835527bbe8ba773919dd753a07cc802c2716f4cf136b9cef1bc630f9a4180"} Dec 10 15:02:03 crc kubenswrapper[4718]: I1210 15:02:03.760341 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e21c07-1414-4283-b8ad-e315fde18459","Type":"ContainerDied","Data":"18f12522ac7bca50c83ea3fbe66b16cad0f7558b428d60802cd584634921cc64"} Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.052930 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" path="/var/lib/kubelet/pods/dffdff5c-1fe4-4dbb-93c7-14f497a4e939/volumes" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.352254 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.432211 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-nova-metadata-tls-certs\") pod \"f0e21c07-1414-4283-b8ad-e315fde18459\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.432458 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2cwk\" (UniqueName: \"kubernetes.io/projected/f0e21c07-1414-4283-b8ad-e315fde18459-kube-api-access-q2cwk\") pod \"f0e21c07-1414-4283-b8ad-e315fde18459\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.432539 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-combined-ca-bundle\") pod \"f0e21c07-1414-4283-b8ad-e315fde18459\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " Dec 10 15:02:04 crc 
kubenswrapper[4718]: I1210 15:02:04.432650 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-config-data\") pod \"f0e21c07-1414-4283-b8ad-e315fde18459\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.432779 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e21c07-1414-4283-b8ad-e315fde18459-logs\") pod \"f0e21c07-1414-4283-b8ad-e315fde18459\" (UID: \"f0e21c07-1414-4283-b8ad-e315fde18459\") " Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.434320 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e21c07-1414-4283-b8ad-e315fde18459-logs" (OuterVolumeSpecName: "logs") pod "f0e21c07-1414-4283-b8ad-e315fde18459" (UID: "f0e21c07-1414-4283-b8ad-e315fde18459"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.454127 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e21c07-1414-4283-b8ad-e315fde18459-kube-api-access-q2cwk" (OuterVolumeSpecName: "kube-api-access-q2cwk") pod "f0e21c07-1414-4283-b8ad-e315fde18459" (UID: "f0e21c07-1414-4283-b8ad-e315fde18459"). InnerVolumeSpecName "kube-api-access-q2cwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.487977 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-config-data" (OuterVolumeSpecName: "config-data") pod "f0e21c07-1414-4283-b8ad-e315fde18459" (UID: "f0e21c07-1414-4283-b8ad-e315fde18459"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.497738 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0e21c07-1414-4283-b8ad-e315fde18459" (UID: "f0e21c07-1414-4283-b8ad-e315fde18459"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.519808 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f0e21c07-1414-4283-b8ad-e315fde18459" (UID: "f0e21c07-1414-4283-b8ad-e315fde18459"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.538314 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.538362 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0e21c07-1414-4283-b8ad-e315fde18459-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.538373 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.538403 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2cwk\" (UniqueName: 
\"kubernetes.io/projected/f0e21c07-1414-4283-b8ad-e315fde18459-kube-api-access-q2cwk\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.538414 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e21c07-1414-4283-b8ad-e315fde18459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.778000 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f0e21c07-1414-4283-b8ad-e315fde18459","Type":"ContainerDied","Data":"51f23c23018d2f110c6a360ee8fec50d9b2972bbe684b69a67dcba4ec452ea21"} Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.778118 4718 scope.go:117] "RemoveContainer" containerID="916835527bbe8ba773919dd753a07cc802c2716f4cf136b9cef1bc630f9a4180" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.778180 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.814217 4718 scope.go:117] "RemoveContainer" containerID="18f12522ac7bca50c83ea3fbe66b16cad0f7558b428d60802cd584634921cc64" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.898454 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.924534 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.952449 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:04 crc kubenswrapper[4718]: E1210 15:02:04.953144 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-log" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953168 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-log" Dec 10 15:02:04 crc kubenswrapper[4718]: E1210 15:02:04.953220 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-metadata" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953228 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-metadata" Dec 10 15:02:04 crc kubenswrapper[4718]: E1210 15:02:04.953243 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa714e1-7bc1-4d1d-b829-f053c2a4404c" containerName="nova-manage" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953255 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa714e1-7bc1-4d1d-b829-f053c2a4404c" containerName="nova-manage" Dec 10 15:02:04 crc kubenswrapper[4718]: E1210 15:02:04.953282 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="init" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953290 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="init" Dec 10 15:02:04 crc kubenswrapper[4718]: E1210 15:02:04.953305 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="dnsmasq-dns" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953312 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="dnsmasq-dns" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953622 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa714e1-7bc1-4d1d-b829-f053c2a4404c" containerName="nova-manage" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953657 4718 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dffdff5c-1fe4-4dbb-93c7-14f497a4e939" containerName="dnsmasq-dns" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953685 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-metadata" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.953702 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" containerName="nova-metadata-log" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.955832 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.961606 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.970835 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:02:04 crc kubenswrapper[4718]: I1210 15:02:04.973176 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.052819 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.053066 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4aed149-3d16-484b-bef2-7828f1ffda8e-logs\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.053294 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.053437 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzn2v\" (UniqueName: \"kubernetes.io/projected/c4aed149-3d16-484b-bef2-7828f1ffda8e-kube-api-access-jzn2v\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.053498 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-config-data\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.156415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.157212 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4aed149-3d16-484b-bef2-7828f1ffda8e-logs\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.157415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.157517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzn2v\" (UniqueName: \"kubernetes.io/projected/c4aed149-3d16-484b-bef2-7828f1ffda8e-kube-api-access-jzn2v\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.157566 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-config-data\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.159448 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4aed149-3d16-484b-bef2-7828f1ffda8e-logs\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.173444 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.174138 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 
15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.191762 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-config-data\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.197217 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzn2v\" (UniqueName: \"kubernetes.io/projected/c4aed149-3d16-484b-bef2-7828f1ffda8e-kube-api-access-jzn2v\") pod \"nova-metadata-0\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.309827 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.442370 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.473367 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fdh2\" (UniqueName: \"kubernetes.io/projected/c7a2c239-7449-4684-a8c8-2d295d790367-kube-api-access-8fdh2\") pod \"c7a2c239-7449-4684-a8c8-2d295d790367\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.473926 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-combined-ca-bundle\") pod \"c7a2c239-7449-4684-a8c8-2d295d790367\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.474009 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-config-data\") pod \"c7a2c239-7449-4684-a8c8-2d295d790367\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.474220 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a2c239-7449-4684-a8c8-2d295d790367-logs\") pod \"c7a2c239-7449-4684-a8c8-2d295d790367\" (UID: \"c7a2c239-7449-4684-a8c8-2d295d790367\") " Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.476060 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a2c239-7449-4684-a8c8-2d295d790367-logs" (OuterVolumeSpecName: "logs") pod "c7a2c239-7449-4684-a8c8-2d295d790367" (UID: "c7a2c239-7449-4684-a8c8-2d295d790367"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.645742 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a2c239-7449-4684-a8c8-2d295d790367-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:05 crc kubenswrapper[4718]: E1210 15:02:05.662102 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.669804 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-config-data" (OuterVolumeSpecName: "config-data") pod "c7a2c239-7449-4684-a8c8-2d295d790367" (UID: "c7a2c239-7449-4684-a8c8-2d295d790367"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.676559 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a2c239-7449-4684-a8c8-2d295d790367-kube-api-access-8fdh2" (OuterVolumeSpecName: "kube-api-access-8fdh2") pod "c7a2c239-7449-4684-a8c8-2d295d790367" (UID: "c7a2c239-7449-4684-a8c8-2d295d790367"). InnerVolumeSpecName "kube-api-access-8fdh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:05 crc kubenswrapper[4718]: E1210 15:02:05.693862 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:02:05 crc kubenswrapper[4718]: E1210 15:02:05.698910 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:02:05 crc kubenswrapper[4718]: E1210 15:02:05.699003 4718 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d30ea5cb-94c9-4beb-83a0-7f076696c395" containerName="nova-scheduler-scheduler" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.719738 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7a2c239-7449-4684-a8c8-2d295d790367" (UID: 
"c7a2c239-7449-4684-a8c8-2d295d790367"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.748231 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fdh2\" (UniqueName: \"kubernetes.io/projected/c7a2c239-7449-4684-a8c8-2d295d790367-kube-api-access-8fdh2\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.748301 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.748313 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a2c239-7449-4684-a8c8-2d295d790367-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.800186 4718 generic.go:334] "Generic (PLEG): container finished" podID="c7a2c239-7449-4684-a8c8-2d295d790367" containerID="251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6" exitCode=0 Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.800338 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.800316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a2c239-7449-4684-a8c8-2d295d790367","Type":"ContainerDied","Data":"251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6"} Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.800434 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a2c239-7449-4684-a8c8-2d295d790367","Type":"ContainerDied","Data":"24e12d387e8f33b968386586eaaac6b6db0bdbea04ff120660fe433fb21f654a"} Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.800461 4718 scope.go:117] "RemoveContainer" containerID="251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.870454 4718 scope.go:117] "RemoveContainer" containerID="81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.873718 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.971939 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.986472 4718 scope.go:117] "RemoveContainer" containerID="251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6" Dec 10 15:02:05 crc kubenswrapper[4718]: E1210 15:02:05.987287 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6\": container with ID starting with 251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6 not found: ID does not exist" containerID="251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.987409 
4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6"} err="failed to get container status \"251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6\": rpc error: code = NotFound desc = could not find container \"251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6\": container with ID starting with 251c19bffd4a53f48ebe75bb361beae9b0bac0ce251f3cb70330164617182dc6 not found: ID does not exist" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.987448 4718 scope.go:117] "RemoveContainer" containerID="81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16" Dec 10 15:02:05 crc kubenswrapper[4718]: E1210 15:02:05.989269 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16\": container with ID starting with 81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16 not found: ID does not exist" containerID="81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16" Dec 10 15:02:05 crc kubenswrapper[4718]: I1210 15:02:05.989317 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16"} err="failed to get container status \"81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16\": rpc error: code = NotFound desc = could not find container \"81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16\": container with ID starting with 81a8725c2e556abec8f3a73eb66872e8d3d305b3136a394a3a303058b8403e16 not found: ID does not exist" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.118929 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" 
path="/var/lib/kubelet/pods/c7a2c239-7449-4684-a8c8-2d295d790367/volumes" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.120614 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e21c07-1414-4283-b8ad-e315fde18459" path="/var/lib/kubelet/pods/f0e21c07-1414-4283-b8ad-e315fde18459/volumes" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.121595 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:06 crc kubenswrapper[4718]: E1210 15:02:06.122163 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-log" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.122189 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-log" Dec 10 15:02:06 crc kubenswrapper[4718]: E1210 15:02:06.122206 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-api" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.122215 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-api" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.123952 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-api" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.124001 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a2c239-7449-4684-a8c8-2d295d790367" containerName="nova-api-log" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.129312 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.129374 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 
15:02:06.129582 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.136138 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.311555 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.311673 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-config-data\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.311882 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57gm4\" (UniqueName: \"kubernetes.io/projected/0266bf72-60bf-4eb9-992a-79f1f5d35ced-kube-api-access-57gm4\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.311924 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0266bf72-60bf-4eb9-992a-79f1f5d35ced-logs\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: E1210 15:02:06.324099 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7a2c239_7449_4684_a8c8_2d295d790367.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7a2c239_7449_4684_a8c8_2d295d790367.slice/crio-24e12d387e8f33b968386586eaaac6b6db0bdbea04ff120660fe433fb21f654a\": RecentStats: unable to find data in memory cache]" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.414157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-config-data\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.414803 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57gm4\" (UniqueName: \"kubernetes.io/projected/0266bf72-60bf-4eb9-992a-79f1f5d35ced-kube-api-access-57gm4\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.414851 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0266bf72-60bf-4eb9-992a-79f1f5d35ced-logs\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.414935 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.417942 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0266bf72-60bf-4eb9-992a-79f1f5d35ced-logs\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.421896 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-config-data\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.425312 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.458771 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57gm4\" (UniqueName: \"kubernetes.io/projected/0266bf72-60bf-4eb9-992a-79f1f5d35ced-kube-api-access-57gm4\") pod \"nova-api-0\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.519625 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.970657 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4aed149-3d16-484b-bef2-7828f1ffda8e","Type":"ContainerStarted","Data":"557591a1a71180960c70024cdaabc7123b9d03913c6069cf5a5e26eb6ce3fbc5"} Dec 10 15:02:06 crc kubenswrapper[4718]: I1210 15:02:06.971449 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4aed149-3d16-484b-bef2-7828f1ffda8e","Type":"ContainerStarted","Data":"c84f7331c71059452e4c28d59fcb1a8ef176e8bb8aa7e797cfc20c661dc36b71"} Dec 10 15:02:07 crc kubenswrapper[4718]: I1210 15:02:07.315354 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:07 crc kubenswrapper[4718]: W1210 15:02:07.351119 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0266bf72_60bf_4eb9_992a_79f1f5d35ced.slice/crio-4ccc4866b3c7af9cd43ee2e5142da412a68cdd296ee8a3f0296975aafa22d63a WatchSource:0}: Error finding container 4ccc4866b3c7af9cd43ee2e5142da412a68cdd296ee8a3f0296975aafa22d63a: Status 404 returned error can't find the container with id 4ccc4866b3c7af9cd43ee2e5142da412a68cdd296ee8a3f0296975aafa22d63a Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.011965 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0266bf72-60bf-4eb9-992a-79f1f5d35ced","Type":"ContainerStarted","Data":"014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e"} Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.013061 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0266bf72-60bf-4eb9-992a-79f1f5d35ced","Type":"ContainerStarted","Data":"4ccc4866b3c7af9cd43ee2e5142da412a68cdd296ee8a3f0296975aafa22d63a"} Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.035829 
4718 generic.go:334] "Generic (PLEG): container finished" podID="d30ea5cb-94c9-4beb-83a0-7f076696c395" containerID="d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70" exitCode=0 Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.049241 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4aed149-3d16-484b-bef2-7828f1ffda8e","Type":"ContainerStarted","Data":"23d4dcb35f647dcee7ccd4837f689e76a074605956223bd46cff165951a06847"} Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.049324 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d30ea5cb-94c9-4beb-83a0-7f076696c395","Type":"ContainerDied","Data":"d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70"} Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.077317 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.077277694 podStartE2EDuration="4.077277694s" podCreationTimestamp="2025-12-10 15:02:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:08.064111953 +0000 UTC m=+1833.013335370" watchObservedRunningTime="2025-12-10 15:02:08.077277694 +0000 UTC m=+1833.026501111" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.172277 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.233440 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-config-data\") pod \"d30ea5cb-94c9-4beb-83a0-7f076696c395\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.234133 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-combined-ca-bundle\") pod \"d30ea5cb-94c9-4beb-83a0-7f076696c395\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.234748 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf67j\" (UniqueName: \"kubernetes.io/projected/d30ea5cb-94c9-4beb-83a0-7f076696c395-kube-api-access-bf67j\") pod \"d30ea5cb-94c9-4beb-83a0-7f076696c395\" (UID: \"d30ea5cb-94c9-4beb-83a0-7f076696c395\") " Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.250402 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30ea5cb-94c9-4beb-83a0-7f076696c395-kube-api-access-bf67j" (OuterVolumeSpecName: "kube-api-access-bf67j") pod "d30ea5cb-94c9-4beb-83a0-7f076696c395" (UID: "d30ea5cb-94c9-4beb-83a0-7f076696c395"). InnerVolumeSpecName "kube-api-access-bf67j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.279930 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-config-data" (OuterVolumeSpecName: "config-data") pod "d30ea5cb-94c9-4beb-83a0-7f076696c395" (UID: "d30ea5cb-94c9-4beb-83a0-7f076696c395"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.286172 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d30ea5cb-94c9-4beb-83a0-7f076696c395" (UID: "d30ea5cb-94c9-4beb-83a0-7f076696c395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.339154 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.339274 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d30ea5cb-94c9-4beb-83a0-7f076696c395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:08 crc kubenswrapper[4718]: I1210 15:02:08.339293 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf67j\" (UniqueName: \"kubernetes.io/projected/d30ea5cb-94c9-4beb-83a0-7f076696c395-kube-api-access-bf67j\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.052509 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0266bf72-60bf-4eb9-992a-79f1f5d35ced","Type":"ContainerStarted","Data":"a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd"} Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.055718 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.056349 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d30ea5cb-94c9-4beb-83a0-7f076696c395","Type":"ContainerDied","Data":"2a801fd8ada83477cdf8f048ee2bf5f8e4543f6bc7515bc705bcc8a7002efbaf"} Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.056443 4718 scope.go:117] "RemoveContainer" containerID="d37e4d0d16625f8940f1159e3d619f8e3a1df51c20100283cac38dd774ba7a70" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.111550 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.111511544 podStartE2EDuration="4.111511544s" podCreationTimestamp="2025-12-10 15:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:09.096308632 +0000 UTC m=+1834.045532049" watchObservedRunningTime="2025-12-10 15:02:09.111511544 +0000 UTC m=+1834.060734961" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.153107 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.168373 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.182381 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:09 crc kubenswrapper[4718]: E1210 15:02:09.183535 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30ea5cb-94c9-4beb-83a0-7f076696c395" containerName="nova-scheduler-scheduler" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.183571 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30ea5cb-94c9-4beb-83a0-7f076696c395" containerName="nova-scheduler-scheduler" Dec 10 15:02:09 crc kubenswrapper[4718]: 
I1210 15:02:09.184071 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30ea5cb-94c9-4beb-83a0-7f076696c395" containerName="nova-scheduler-scheduler" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.185640 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.193816 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.202533 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.264096 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66z7j\" (UniqueName: \"kubernetes.io/projected/09e476e7-3088-49bf-8e65-94212290e5e8-kube-api-access-66z7j\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.264212 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.264356 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-config-data\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.366445 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.366760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-config-data\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.366925 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66z7j\" (UniqueName: \"kubernetes.io/projected/09e476e7-3088-49bf-8e65-94212290e5e8-kube-api-access-66z7j\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.378384 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.396176 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-config-data\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.404836 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66z7j\" (UniqueName: \"kubernetes.io/projected/09e476e7-3088-49bf-8e65-94212290e5e8-kube-api-access-66z7j\") pod \"nova-scheduler-0\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " 
pod="openstack/nova-scheduler-0" Dec 10 15:02:09 crc kubenswrapper[4718]: I1210 15:02:09.526332 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:10 crc kubenswrapper[4718]: I1210 15:02:10.041322 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30ea5cb-94c9-4beb-83a0-7f076696c395" path="/var/lib/kubelet/pods/d30ea5cb-94c9-4beb-83a0-7f076696c395/volumes" Dec 10 15:02:10 crc kubenswrapper[4718]: I1210 15:02:10.243842 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:10 crc kubenswrapper[4718]: I1210 15:02:10.310579 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:02:10 crc kubenswrapper[4718]: I1210 15:02:10.310659 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:02:10 crc kubenswrapper[4718]: I1210 15:02:10.899489 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:02:10 crc kubenswrapper[4718]: I1210 15:02:10.900199 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" containerName="kube-state-metrics" containerID="cri-o://dd5efa31a3cc66a353bf75473378d6508fad8bf88fa1da9766951ed362f1ef91" gracePeriod=30 Dec 10 15:02:11 crc kubenswrapper[4718]: I1210 15:02:11.087567 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09e476e7-3088-49bf-8e65-94212290e5e8","Type":"ContainerStarted","Data":"96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096"} Dec 10 15:02:11 crc kubenswrapper[4718]: I1210 15:02:11.087649 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"09e476e7-3088-49bf-8e65-94212290e5e8","Type":"ContainerStarted","Data":"c24bca1ed04358f77735fbddc266861382fa036041c72503ba795915d45395bc"} Dec 10 15:02:11 crc kubenswrapper[4718]: I1210 15:02:11.115412 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.115344891 podStartE2EDuration="2.115344891s" podCreationTimestamp="2025-12-10 15:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:11.107731579 +0000 UTC m=+1836.056955006" watchObservedRunningTime="2025-12-10 15:02:11.115344891 +0000 UTC m=+1836.064568308" Dec 10 15:02:12 crc kubenswrapper[4718]: I1210 15:02:12.118137 4718 generic.go:334] "Generic (PLEG): container finished" podID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" containerID="dd5efa31a3cc66a353bf75473378d6508fad8bf88fa1da9766951ed362f1ef91" exitCode=2 Dec 10 15:02:12 crc kubenswrapper[4718]: I1210 15:02:12.118217 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df93c205-2f35-4c9d-b3ce-45174d5bfc2d","Type":"ContainerDied","Data":"dd5efa31a3cc66a353bf75473378d6508fad8bf88fa1da9766951ed362f1ef91"} Dec 10 15:02:12 crc kubenswrapper[4718]: I1210 15:02:12.768738 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:02:12 crc kubenswrapper[4718]: I1210 15:02:12.797710 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rnc7\" (UniqueName: \"kubernetes.io/projected/df93c205-2f35-4c9d-b3ce-45174d5bfc2d-kube-api-access-8rnc7\") pod \"df93c205-2f35-4c9d-b3ce-45174d5bfc2d\" (UID: \"df93c205-2f35-4c9d-b3ce-45174d5bfc2d\") " Dec 10 15:02:12 crc kubenswrapper[4718]: I1210 15:02:12.803795 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df93c205-2f35-4c9d-b3ce-45174d5bfc2d-kube-api-access-8rnc7" (OuterVolumeSpecName: "kube-api-access-8rnc7") pod "df93c205-2f35-4c9d-b3ce-45174d5bfc2d" (UID: "df93c205-2f35-4c9d-b3ce-45174d5bfc2d"). InnerVolumeSpecName "kube-api-access-8rnc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:12 crc kubenswrapper[4718]: I1210 15:02:12.900313 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rnc7\" (UniqueName: \"kubernetes.io/projected/df93c205-2f35-4c9d-b3ce-45174d5bfc2d-kube-api-access-8rnc7\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.133304 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"df93c205-2f35-4c9d-b3ce-45174d5bfc2d","Type":"ContainerDied","Data":"ac5f251cc90fdf5c3f86c9f2b98c3eaac1c8218b5b067fede14aa969b67906cc"} Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.133367 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.133405 4718 scope.go:117] "RemoveContainer" containerID="dd5efa31a3cc66a353bf75473378d6508fad8bf88fa1da9766951ed362f1ef91" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.182272 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.194259 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.208017 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:02:13 crc kubenswrapper[4718]: E1210 15:02:13.208744 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" containerName="kube-state-metrics" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.208770 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" containerName="kube-state-metrics" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.208976 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" containerName="kube-state-metrics" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.210032 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.223379 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.225567 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.231124 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.312145 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjd76\" (UniqueName: \"kubernetes.io/projected/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-api-access-hjd76\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.312324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.312428 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.312917 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.416296 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.416444 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjd76\" (UniqueName: \"kubernetes.io/projected/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-api-access-hjd76\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.416514 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.416564 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.423503 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.423766 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.425097 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.435980 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjd76\" (UniqueName: \"kubernetes.io/projected/7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c-kube-api-access-hjd76\") pod \"kube-state-metrics-0\" (UID: \"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c\") " pod="openstack/kube-state-metrics-0" Dec 10 15:02:13 crc kubenswrapper[4718]: I1210 15:02:13.549724 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.022009 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:02:14 crc kubenswrapper[4718]: E1210 15:02:14.022923 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.041133 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df93c205-2f35-4c9d-b3ce-45174d5bfc2d" path="/var/lib/kubelet/pods/df93c205-2f35-4c9d-b3ce-45174d5bfc2d/volumes" Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.166337 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 10 15:02:14 crc kubenswrapper[4718]: W1210 15:02:14.214499 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a8da87b_0de3_4e86_ad3d_b29f4cd4ad1c.slice/crio-cb7649e035ec2a00d98516327c60dee702cdddb3e029a01e52c9db3811dd3673 WatchSource:0}: Error finding container cb7649e035ec2a00d98516327c60dee702cdddb3e029a01e52c9db3811dd3673: Status 404 returned error can't find the container with id cb7649e035ec2a00d98516327c60dee702cdddb3e029a01e52c9db3811dd3673 Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.234416 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.235073 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-central-agent" containerID="cri-o://6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3" gracePeriod=30 Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.235155 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-notification-agent" containerID="cri-o://4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28" gracePeriod=30 Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.235166 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="sg-core" containerID="cri-o://ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811" gracePeriod=30 Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.235149 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="proxy-httpd" containerID="cri-o://59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949" gracePeriod=30 Dec 10 15:02:14 crc kubenswrapper[4718]: I1210 15:02:14.527180 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.219023 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c","Type":"ContainerStarted","Data":"cb7649e035ec2a00d98516327c60dee702cdddb3e029a01e52c9db3811dd3673"} Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.227717 4718 generic.go:334] "Generic (PLEG): container finished" podID="32c38ab3-5fa2-49cd-b76c-625823fb56a6" containerID="43b23f2d7a8abf2cb507c784db0e8e74a942b2fb0ae70adf892ff570fb9e939a" exitCode=0 Dec 10 15:02:15 crc 
kubenswrapper[4718]: I1210 15:02:15.227849 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" event={"ID":"32c38ab3-5fa2-49cd-b76c-625823fb56a6","Type":"ContainerDied","Data":"43b23f2d7a8abf2cb507c784db0e8e74a942b2fb0ae70adf892ff570fb9e939a"} Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.483608 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.484691 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.492651 4718 generic.go:334] "Generic (PLEG): container finished" podID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerID="59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949" exitCode=0 Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.492698 4718 generic.go:334] "Generic (PLEG): container finished" podID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerID="ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811" exitCode=2 Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.492706 4718 generic.go:334] "Generic (PLEG): container finished" podID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerID="6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3" exitCode=0 Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.492741 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerDied","Data":"59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949"} Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.492793 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerDied","Data":"ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811"} 
Dec 10 15:02:15 crc kubenswrapper[4718]: I1210 15:02:15.492828 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerDied","Data":"6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3"} Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.502082 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.502184 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.516493 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c","Type":"ContainerStarted","Data":"d6cca2b967cdf378e43cc634ffd50e72ef8048610fe75b8c7e1ebabc96382b24"} Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.516951 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.521155 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.521255 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:02:16 crc kubenswrapper[4718]: I1210 15:02:16.554208 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=2.712824644 podStartE2EDuration="3.554175199s" podCreationTimestamp="2025-12-10 15:02:13 +0000 UTC" firstStartedPulling="2025-12-10 15:02:14.219117796 +0000 UTC m=+1839.168341213" lastFinishedPulling="2025-12-10 15:02:15.060468351 +0000 UTC m=+1840.009691768" observedRunningTime="2025-12-10 15:02:16.542559617 +0000 UTC m=+1841.491783034" watchObservedRunningTime="2025-12-10 15:02:16.554175199 +0000 UTC m=+1841.503398616" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.095230 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.275579 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-scripts\") pod \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.275762 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-combined-ca-bundle\") pod \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.275893 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") pod \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.276090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-config-data\") pod 
\"32c38ab3-5fa2-49cd-b76c-625823fb56a6\" (UID: \"32c38ab3-5fa2-49cd-b76c-625823fb56a6\") " Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.300879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5" (OuterVolumeSpecName: "kube-api-access-955v5") pod "32c38ab3-5fa2-49cd-b76c-625823fb56a6" (UID: "32c38ab3-5fa2-49cd-b76c-625823fb56a6"). InnerVolumeSpecName "kube-api-access-955v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.312660 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-scripts" (OuterVolumeSpecName: "scripts") pod "32c38ab3-5fa2-49cd-b76c-625823fb56a6" (UID: "32c38ab3-5fa2-49cd-b76c-625823fb56a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.322723 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32c38ab3-5fa2-49cd-b76c-625823fb56a6" (UID: "32c38ab3-5fa2-49cd-b76c-625823fb56a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.331634 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-config-data" (OuterVolumeSpecName: "config-data") pod "32c38ab3-5fa2-49cd-b76c-625823fb56a6" (UID: "32c38ab3-5fa2-49cd-b76c-625823fb56a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.379531 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.379595 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.379613 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/32c38ab3-5fa2-49cd-b76c-625823fb56a6-kube-api-access-955v5\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.379627 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32c38ab3-5fa2-49cd-b76c-625823fb56a6-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.533139 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" event={"ID":"32c38ab3-5fa2-49cd-b76c-625823fb56a6","Type":"ContainerDied","Data":"3804897b5148179e1737a76a96ff222ba4166cf69f09fe60e868bee52ed26bb3"} Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.533216 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3804897b5148179e1737a76a96ff222ba4166cf69f09fe60e868bee52ed26bb3" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.533290 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5pzcj" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.603773 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.604159 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.664010 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:02:17 crc kubenswrapper[4718]: E1210 15:02:17.664819 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32c38ab3-5fa2-49cd-b76c-625823fb56a6" containerName="nova-cell1-conductor-db-sync" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.664850 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="32c38ab3-5fa2-49cd-b76c-625823fb56a6" containerName="nova-cell1-conductor-db-sync" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.665161 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="32c38ab3-5fa2-49cd-b76c-625823fb56a6" containerName="nova-cell1-conductor-db-sync" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.666320 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.670419 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.678856 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.792630 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612c753e-cb9b-4995-b219-6e3b0d60cc22-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.792909 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrjz\" (UniqueName: \"kubernetes.io/projected/612c753e-cb9b-4995-b219-6e3b0d60cc22-kube-api-access-pbrjz\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.792950 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612c753e-cb9b-4995-b219-6e3b0d60cc22-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.895440 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrjz\" (UniqueName: \"kubernetes.io/projected/612c753e-cb9b-4995-b219-6e3b0d60cc22-kube-api-access-pbrjz\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 
15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.895529 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612c753e-cb9b-4995-b219-6e3b0d60cc22-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.895576 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612c753e-cb9b-4995-b219-6e3b0d60cc22-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.900365 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/612c753e-cb9b-4995-b219-6e3b0d60cc22-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.904091 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/612c753e-cb9b-4995-b219-6e3b0d60cc22-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:17 crc kubenswrapper[4718]: I1210 15:02:17.947791 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrjz\" (UniqueName: \"kubernetes.io/projected/612c753e-cb9b-4995-b219-6e3b0d60cc22-kube-api-access-pbrjz\") pod \"nova-cell1-conductor-0\" (UID: \"612c753e-cb9b-4995-b219-6e3b0d60cc22\") " pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:18 crc kubenswrapper[4718]: I1210 15:02:18.013054 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:18 crc kubenswrapper[4718]: I1210 15:02:18.720343 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 10 15:02:18 crc kubenswrapper[4718]: W1210 15:02:18.745743 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod612c753e_cb9b_4995_b219_6e3b0d60cc22.slice/crio-c45480db959a7e03a5bf59083e104ccd21b04d1e2734cb615ca101b7010c0db9 WatchSource:0}: Error finding container c45480db959a7e03a5bf59083e104ccd21b04d1e2734cb615ca101b7010c0db9: Status 404 returned error can't find the container with id c45480db959a7e03a5bf59083e104ccd21b04d1e2734cb615ca101b7010c0db9 Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.527170 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.570312 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.579024 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.595019 4718 generic.go:334] "Generic (PLEG): container finished" podID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerID="4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28" exitCode=0 Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.595098 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.595120 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerDied","Data":"4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28"} Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.596602 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8eeb4354-d452-4452-8b5e-b792475c3f53","Type":"ContainerDied","Data":"17da44dcea5d5e3863fe33af609d773d20b42b2b7f7e550e04c8792aa147ac45"} Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.596710 4718 scope.go:117] "RemoveContainer" containerID="59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.607933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"612c753e-cb9b-4995-b219-6e3b0d60cc22","Type":"ContainerStarted","Data":"98b8fa9adf2d61d9d08c01ce0143e7055da87769ac58058fe46be8d3a0d4482a"} Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.607987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"612c753e-cb9b-4995-b219-6e3b0d60cc22","Type":"ContainerStarted","Data":"c45480db959a7e03a5bf59083e104ccd21b04d1e2734cb615ca101b7010c0db9"} Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.608001 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.645719 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.645690346 podStartE2EDuration="2.645690346s" podCreationTimestamp="2025-12-10 15:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:19.645205744 +0000 UTC m=+1844.594429171" watchObservedRunningTime="2025-12-10 15:02:19.645690346 +0000 UTC m=+1844.594913763" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.646342 4718 scope.go:117] "RemoveContainer" containerID="ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.665701 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.684237 4718 scope.go:117] "RemoveContainer" containerID="4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.740735 4718 scope.go:117] "RemoveContainer" containerID="6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.755408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-run-httpd\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.755519 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-config-data\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.756214 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-scripts\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.756281 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-combined-ca-bundle\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.756306 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-log-httpd\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.756323 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85mxc\" (UniqueName: \"kubernetes.io/projected/8eeb4354-d452-4452-8b5e-b792475c3f53-kube-api-access-85mxc\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.756485 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-sg-core-conf-yaml\") pod \"8eeb4354-d452-4452-8b5e-b792475c3f53\" (UID: \"8eeb4354-d452-4452-8b5e-b792475c3f53\") " Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.757848 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.758266 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.789925 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeb4354-d452-4452-8b5e-b792475c3f53-kube-api-access-85mxc" (OuterVolumeSpecName: "kube-api-access-85mxc") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). InnerVolumeSpecName "kube-api-access-85mxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.818523 4718 scope.go:117] "RemoveContainer" containerID="59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949" Dec 10 15:02:19 crc kubenswrapper[4718]: E1210 15:02:19.822697 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949\": container with ID starting with 59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949 not found: ID does not exist" containerID="59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.822777 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949"} err="failed to get container status \"59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949\": rpc error: code = NotFound desc = could not find container 
\"59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949\": container with ID starting with 59e3921320acf5a0d11557951f079bc66ad3451dd80f2aca7ae387cafd549949 not found: ID does not exist" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.822824 4718 scope.go:117] "RemoveContainer" containerID="ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811" Dec 10 15:02:19 crc kubenswrapper[4718]: E1210 15:02:19.823894 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811\": container with ID starting with ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811 not found: ID does not exist" containerID="ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.824050 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811"} err="failed to get container status \"ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811\": rpc error: code = NotFound desc = could not find container \"ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811\": container with ID starting with ccef5060c0744d1e798e79df05f9f97e208cc3c02db7ce4f32b9281730f77811 not found: ID does not exist" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.824204 4718 scope.go:117] "RemoveContainer" containerID="4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.824511 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:19 crc kubenswrapper[4718]: E1210 15:02:19.824744 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28\": container with ID starting with 4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28 not found: ID does not exist" containerID="4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.824869 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28"} err="failed to get container status \"4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28\": rpc error: code = NotFound desc = could not find container \"4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28\": container with ID starting with 4dd01362cc6310fe799bfd4d2cd248b6ef7c8b294b36462c3ec296eae9027c28 not found: ID does not exist" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.824965 4718 scope.go:117] "RemoveContainer" containerID="6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3" Dec 10 15:02:19 crc kubenswrapper[4718]: E1210 15:02:19.825312 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3\": container with ID starting with 6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3 not found: ID does not exist" containerID="6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.825454 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3"} err="failed to get container status \"6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3\": rpc error: code = NotFound desc = could not find container \"6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3\": container with ID starting with 6e4548f06a1ff5038801bc924a4821a1feb6caeb273d331797e12ea015e29cc3 not found: ID does not exist" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.857096 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-scripts" (OuterVolumeSpecName: "scripts") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.859296 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.859321 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.859331 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8eeb4354-d452-4452-8b5e-b792475c3f53-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.859342 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85mxc\" (UniqueName: \"kubernetes.io/projected/8eeb4354-d452-4452-8b5e-b792475c3f53-kube-api-access-85mxc\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:19 crc kubenswrapper[4718]: I1210 15:02:19.859356 4718 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.002334 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.033330 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-config-data" (OuterVolumeSpecName: "config-data") pod "8eeb4354-d452-4452-8b5e-b792475c3f53" (UID: "8eeb4354-d452-4452-8b5e-b792475c3f53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.068894 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.068964 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb4354-d452-4452-8b5e-b792475c3f53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.230819 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.245720 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.271555 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:20 crc kubenswrapper[4718]: E1210 15:02:20.272292 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="proxy-httpd" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272323 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="proxy-httpd" Dec 10 15:02:20 crc kubenswrapper[4718]: E1210 15:02:20.272340 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-notification-agent" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272350 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-notification-agent" Dec 10 15:02:20 crc kubenswrapper[4718]: E1210 15:02:20.272365 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-central-agent" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272373 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-central-agent" Dec 10 15:02:20 crc kubenswrapper[4718]: E1210 15:02:20.272409 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="sg-core" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272418 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="sg-core" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272727 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="sg-core" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272760 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="proxy-httpd" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272780 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-notification-agent" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.272798 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" containerName="ceilometer-central-agent" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.279451 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.283827 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.284862 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.284918 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.295735 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.388810 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-scripts\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.388996 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.391566 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-log-httpd\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.391698 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-run-httpd\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.392265 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7h6v\" (UniqueName: \"kubernetes.io/projected/950afc4b-3374-46ee-9a3e-aea3ee2f6232-kube-api-access-x7h6v\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.392532 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-config-data\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.392742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.392806 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495433 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-log-httpd\") pod \"ceilometer-0\" (UID: 
\"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495496 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-run-httpd\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495531 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7h6v\" (UniqueName: \"kubernetes.io/projected/950afc4b-3374-46ee-9a3e-aea3ee2f6232-kube-api-access-x7h6v\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495586 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-config-data\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495642 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495675 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495738 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-scripts\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.495780 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.496226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-log-httpd\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.496823 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-run-httpd\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.500996 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.503844 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-scripts\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.503900 
4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-config-data\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.504659 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.508565 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.521279 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7h6v\" (UniqueName: \"kubernetes.io/projected/950afc4b-3374-46ee-9a3e-aea3ee2f6232-kube-api-access-x7h6v\") pod \"ceilometer-0\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " pod="openstack/ceilometer-0" Dec 10 15:02:20 crc kubenswrapper[4718]: I1210 15:02:20.610110 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:24 crc kubenswrapper[4718]: I1210 15:02:21.396096 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:24 crc kubenswrapper[4718]: I1210 15:02:21.641533 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerStarted","Data":"08649fcfbd482a3600aaf5340d9f2207914bf6e4f97f1b048c13df1dfba40245"} Dec 10 15:02:24 crc kubenswrapper[4718]: I1210 15:02:22.058417 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eeb4354-d452-4452-8b5e-b792475c3f53" path="/var/lib/kubelet/pods/8eeb4354-d452-4452-8b5e-b792475c3f53/volumes" Dec 10 15:02:24 crc kubenswrapper[4718]: I1210 15:02:23.574595 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 10 15:02:24 crc kubenswrapper[4718]: I1210 15:02:23.672376 4718 generic.go:334] "Generic (PLEG): container finished" podID="cfa18867-53a1-4a8f-a010-318cefec946c" containerID="3a9c0ba4b3ae903c327c813aa2d0f69a2a9c9688da494e9f82cddb28b5ce6177" exitCode=137 Dec 10 15:02:24 crc kubenswrapper[4718]: I1210 15:02:23.672432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cfa18867-53a1-4a8f-a010-318cefec946c","Type":"ContainerDied","Data":"3a9c0ba4b3ae903c327c813aa2d0f69a2a9c9688da494e9f82cddb28b5ce6177"} Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.307047 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.324259 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.325058 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.351810 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.376359 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.465547 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-combined-ca-bundle\") pod \"cfa18867-53a1-4a8f-a010-318cefec946c\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.465679 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-config-data\") pod \"cfa18867-53a1-4a8f-a010-318cefec946c\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.465936 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftrrg\" (UniqueName: \"kubernetes.io/projected/cfa18867-53a1-4a8f-a010-318cefec946c-kube-api-access-ftrrg\") pod \"cfa18867-53a1-4a8f-a010-318cefec946c\" (UID: \"cfa18867-53a1-4a8f-a010-318cefec946c\") " Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.480270 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cfa18867-53a1-4a8f-a010-318cefec946c-kube-api-access-ftrrg" (OuterVolumeSpecName: "kube-api-access-ftrrg") pod "cfa18867-53a1-4a8f-a010-318cefec946c" (UID: "cfa18867-53a1-4a8f-a010-318cefec946c"). InnerVolumeSpecName "kube-api-access-ftrrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.525700 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-config-data" (OuterVolumeSpecName: "config-data") pod "cfa18867-53a1-4a8f-a010-318cefec946c" (UID: "cfa18867-53a1-4a8f-a010-318cefec946c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.560413 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfa18867-53a1-4a8f-a010-318cefec946c" (UID: "cfa18867-53a1-4a8f-a010-318cefec946c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.569224 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftrrg\" (UniqueName: \"kubernetes.io/projected/cfa18867-53a1-4a8f-a010-318cefec946c-kube-api-access-ftrrg\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.570184 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.570247 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfa18867-53a1-4a8f-a010-318cefec946c-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.723342 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerStarted","Data":"5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca"} Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.730057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cfa18867-53a1-4a8f-a010-318cefec946c","Type":"ContainerDied","Data":"e9e8dc52c1b5e1d0049bdd9ec622169708d464ba57a51654564fc9793c9f3cdd"} Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.730192 4718 scope.go:117] "RemoveContainer" containerID="3a9c0ba4b3ae903c327c813aa2d0f69a2a9c9688da494e9f82cddb28b5ce6177" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.730119 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.837630 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.849875 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.881550 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:02:25 crc kubenswrapper[4718]: E1210 15:02:25.882658 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa18867-53a1-4a8f-a010-318cefec946c" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.882752 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa18867-53a1-4a8f-a010-318cefec946c" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.883205 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa18867-53a1-4a8f-a010-318cefec946c" containerName="nova-cell1-novncproxy-novncproxy" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.884430 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.889108 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.889700 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.894194 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.896194 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.986963 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.987010 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.987159 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc 
kubenswrapper[4718]: I1210 15:02:25.987241 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:25 crc kubenswrapper[4718]: I1210 15:02:25.987308 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cdnc\" (UniqueName: \"kubernetes.io/projected/da134815-ca06-4544-86a3-ebbc3d219c56-kube-api-access-9cdnc\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.040594 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa18867-53a1-4a8f-a010-318cefec946c" path="/var/lib/kubelet/pods/cfa18867-53a1-4a8f-a010-318cefec946c/volumes" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.089737 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.089872 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.089921 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cdnc\" (UniqueName: 
\"kubernetes.io/projected/da134815-ca06-4544-86a3-ebbc3d219c56-kube-api-access-9cdnc\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.090010 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.090054 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.097218 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.097424 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.098239 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.100178 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da134815-ca06-4544-86a3-ebbc3d219c56-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.118844 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cdnc\" (UniqueName: \"kubernetes.io/projected/da134815-ca06-4544-86a3-ebbc3d219c56-kube-api-access-9cdnc\") pod \"nova-cell1-novncproxy-0\" (UID: \"da134815-ca06-4544-86a3-ebbc3d219c56\") " pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.220277 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.537045 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.539080 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.549235 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.577898 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.773542 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerStarted","Data":"adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800"} Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.774714 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.801296 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:02:26 crc kubenswrapper[4718]: I1210 15:02:26.805999 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.124199 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.175365 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-hnvbs"] Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.218445 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.252358 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-hnvbs"] Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.341183 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-sb\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.343076 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kp2c\" (UniqueName: \"kubernetes.io/projected/e45712e4-047b-4b52-bc5f-983749972808-kube-api-access-2kp2c\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.343224 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-config\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.343467 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-svc\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.343593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-swift-storage-0\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.343685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-nb\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.446724 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-config\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.446968 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-svc\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.447052 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-swift-storage-0\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.447105 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-nb\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.447149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-sb\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.447172 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kp2c\" (UniqueName: \"kubernetes.io/projected/e45712e4-047b-4b52-bc5f-983749972808-kube-api-access-2kp2c\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.447935 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-config\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.448238 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-svc\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.448539 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-sb\") pod 
\"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.448833 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-nb\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.450040 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-swift-storage-0\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.587987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kp2c\" (UniqueName: \"kubernetes.io/projected/e45712e4-047b-4b52-bc5f-983749972808-kube-api-access-2kp2c\") pod \"dnsmasq-dns-54599d8f7-hnvbs\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.627563 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:27 crc kubenswrapper[4718]: I1210 15:02:27.845164 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da134815-ca06-4544-86a3-ebbc3d219c56","Type":"ContainerStarted","Data":"aaf20e6c7a594eb6ae25a16ccdfe95a62753730a4ba6d433e0c0e384a5435fe4"} Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.094571 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.420503 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-hnvbs"] Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.869447 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"5ce4270c443111ab27138184c2f5dff045fa11d171a6178ec7316680440101fb"} Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.892226 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerStarted","Data":"20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a"} Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.903906 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"da134815-ca06-4544-86a3-ebbc3d219c56","Type":"ContainerStarted","Data":"be06679ae88a393016deef6e656e0ecc454b29f15196b839ee3e00e3d2708852"} Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.914263 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" event={"ID":"e45712e4-047b-4b52-bc5f-983749972808","Type":"ContainerStarted","Data":"41707bb20433d0cd14ee0c3e5f13c73be1421803852b14041574809ba2eb4977"} Dec 10 15:02:28 crc 
kubenswrapper[4718]: I1210 15:02:28.914358 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" event={"ID":"e45712e4-047b-4b52-bc5f-983749972808","Type":"ContainerStarted","Data":"cb30b3f5fad89918cbd2d2909046529bc1b25488f143de3a9c88e881372df233"} Dec 10 15:02:28 crc kubenswrapper[4718]: I1210 15:02:28.958351 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.9583096859999998 podStartE2EDuration="3.958309686s" podCreationTimestamp="2025-12-10 15:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:28.949502445 +0000 UTC m=+1853.898725882" watchObservedRunningTime="2025-12-10 15:02:28.958309686 +0000 UTC m=+1853.907533093" Dec 10 15:02:29 crc kubenswrapper[4718]: I1210 15:02:29.967553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerStarted","Data":"5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7"} Dec 10 15:02:29 crc kubenswrapper[4718]: I1210 15:02:29.970417 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:02:29 crc kubenswrapper[4718]: I1210 15:02:29.987091 4718 generic.go:334] "Generic (PLEG): container finished" podID="e45712e4-047b-4b52-bc5f-983749972808" containerID="41707bb20433d0cd14ee0c3e5f13c73be1421803852b14041574809ba2eb4977" exitCode=0 Dec 10 15:02:29 crc kubenswrapper[4718]: I1210 15:02:29.987349 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" event={"ID":"e45712e4-047b-4b52-bc5f-983749972808","Type":"ContainerDied","Data":"41707bb20433d0cd14ee0c3e5f13c73be1421803852b14041574809ba2eb4977"} Dec 10 15:02:30 crc kubenswrapper[4718]: I1210 15:02:30.042637 4718 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5288468809999998 podStartE2EDuration="10.042592721s" podCreationTimestamp="2025-12-10 15:02:20 +0000 UTC" firstStartedPulling="2025-12-10 15:02:21.406292865 +0000 UTC m=+1846.355516282" lastFinishedPulling="2025-12-10 15:02:28.920038705 +0000 UTC m=+1853.869262122" observedRunningTime="2025-12-10 15:02:30.000986316 +0000 UTC m=+1854.950209733" watchObservedRunningTime="2025-12-10 15:02:30.042592721 +0000 UTC m=+1854.991816128" Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.003404 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" event={"ID":"e45712e4-047b-4b52-bc5f-983749972808","Type":"ContainerStarted","Data":"af29a698ed499dcfbdb927491560ac434ff739be2b99122711e0955f85d32e07"} Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.004113 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.052402 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" podStartSLOduration=4.052356566 podStartE2EDuration="4.052356566s" podCreationTimestamp="2025-12-10 15:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:31.045223527 +0000 UTC m=+1855.994446944" watchObservedRunningTime="2025-12-10 15:02:31.052356566 +0000 UTC m=+1856.001579993" Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.224506 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.796679 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.797006 4718 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-api-0" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-log" containerID="cri-o://014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e" gracePeriod=30 Dec 10 15:02:31 crc kubenswrapper[4718]: I1210 15:02:31.797203 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-api" containerID="cri-o://a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd" gracePeriod=30 Dec 10 15:02:32 crc kubenswrapper[4718]: I1210 15:02:32.022589 4718 generic.go:334] "Generic (PLEG): container finished" podID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerID="014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e" exitCode=143 Dec 10 15:02:32 crc kubenswrapper[4718]: I1210 15:02:32.034596 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0266bf72-60bf-4eb9-992a-79f1f5d35ced","Type":"ContainerDied","Data":"014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e"} Dec 10 15:02:32 crc kubenswrapper[4718]: I1210 15:02:32.861412 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:33 crc kubenswrapper[4718]: I1210 15:02:33.039243 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-central-agent" containerID="cri-o://5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca" gracePeriod=30 Dec 10 15:02:33 crc kubenswrapper[4718]: I1210 15:02:33.040081 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="proxy-httpd" containerID="cri-o://5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7" gracePeriod=30 Dec 10 15:02:33 crc 
kubenswrapper[4718]: I1210 15:02:33.040279 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="sg-core" containerID="cri-o://20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a" gracePeriod=30 Dec 10 15:02:33 crc kubenswrapper[4718]: I1210 15:02:33.040323 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-notification-agent" containerID="cri-o://adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800" gracePeriod=30 Dec 10 15:02:34 crc kubenswrapper[4718]: I1210 15:02:34.101424 4718 generic.go:334] "Generic (PLEG): container finished" podID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerID="5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7" exitCode=0 Dec 10 15:02:34 crc kubenswrapper[4718]: I1210 15:02:34.101497 4718 generic.go:334] "Generic (PLEG): container finished" podID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerID="20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a" exitCode=2 Dec 10 15:02:34 crc kubenswrapper[4718]: I1210 15:02:34.101509 4718 generic.go:334] "Generic (PLEG): container finished" podID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerID="adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800" exitCode=0 Dec 10 15:02:34 crc kubenswrapper[4718]: I1210 15:02:34.101556 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerDied","Data":"5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7"} Dec 10 15:02:34 crc kubenswrapper[4718]: I1210 15:02:34.101595 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerDied","Data":"20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a"} Dec 10 15:02:34 crc kubenswrapper[4718]: I1210 15:02:34.101608 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerDied","Data":"adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800"} Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.830901 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.940667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-combined-ca-bundle\") pod \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.940907 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-config-data\") pod \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.941139 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0266bf72-60bf-4eb9-992a-79f1f5d35ced-logs\") pod \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\" (UID: \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.941213 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57gm4\" (UniqueName: \"kubernetes.io/projected/0266bf72-60bf-4eb9-992a-79f1f5d35ced-kube-api-access-57gm4\") pod \"0266bf72-60bf-4eb9-992a-79f1f5d35ced\" (UID: 
\"0266bf72-60bf-4eb9-992a-79f1f5d35ced\") " Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.943424 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0266bf72-60bf-4eb9-992a-79f1f5d35ced-logs" (OuterVolumeSpecName: "logs") pod "0266bf72-60bf-4eb9-992a-79f1f5d35ced" (UID: "0266bf72-60bf-4eb9-992a-79f1f5d35ced"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.990636 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0266bf72-60bf-4eb9-992a-79f1f5d35ced-kube-api-access-57gm4" (OuterVolumeSpecName: "kube-api-access-57gm4") pod "0266bf72-60bf-4eb9-992a-79f1f5d35ced" (UID: "0266bf72-60bf-4eb9-992a-79f1f5d35ced"). InnerVolumeSpecName "kube-api-access-57gm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:35 crc kubenswrapper[4718]: I1210 15:02:35.993908 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-config-data" (OuterVolumeSpecName: "config-data") pod "0266bf72-60bf-4eb9-992a-79f1f5d35ced" (UID: "0266bf72-60bf-4eb9-992a-79f1f5d35ced"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.017329 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0266bf72-60bf-4eb9-992a-79f1f5d35ced" (UID: "0266bf72-60bf-4eb9-992a-79f1f5d35ced"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.045491 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0266bf72-60bf-4eb9-992a-79f1f5d35ced-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.045542 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57gm4\" (UniqueName: \"kubernetes.io/projected/0266bf72-60bf-4eb9-992a-79f1f5d35ced-kube-api-access-57gm4\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.045555 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.045566 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0266bf72-60bf-4eb9-992a-79f1f5d35ced-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.140052 4718 generic.go:334] "Generic (PLEG): container finished" podID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerID="a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd" exitCode=0 Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.140114 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0266bf72-60bf-4eb9-992a-79f1f5d35ced","Type":"ContainerDied","Data":"a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd"} Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.140149 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0266bf72-60bf-4eb9-992a-79f1f5d35ced","Type":"ContainerDied","Data":"4ccc4866b3c7af9cd43ee2e5142da412a68cdd296ee8a3f0296975aafa22d63a"} Dec 10 15:02:36 crc kubenswrapper[4718]: 
I1210 15:02:36.140169 4718 scope.go:117] "RemoveContainer" containerID="a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.140376 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.175903 4718 scope.go:117] "RemoveContainer" containerID="014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.189805 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.204369 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.222013 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.225584 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:36 crc kubenswrapper[4718]: E1210 15:02:36.226417 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-api" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.226435 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-api" Dec 10 15:02:36 crc kubenswrapper[4718]: E1210 15:02:36.226457 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-log" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.226463 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-log" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.226701 4718 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-log" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.226724 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" containerName="nova-api-api" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.228223 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.230605 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.231015 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.231266 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.257565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-config-data\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.257635 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.257732 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.257776 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcs9c\" (UniqueName: \"kubernetes.io/projected/fd4410ea-13ef-4e0e-bb41-7758005636f2-kube-api-access-fcs9c\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.257849 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4410ea-13ef-4e0e-bb41-7758005636f2-logs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.257979 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.268467 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.283958 4718 scope.go:117] "RemoveContainer" containerID="a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd" Dec 10 15:02:36 crc kubenswrapper[4718]: E1210 15:02:36.284816 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd\": container with ID starting with a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd not found: ID does not exist" containerID="a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd" Dec 10 15:02:36 crc 
kubenswrapper[4718]: I1210 15:02:36.284856 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd"} err="failed to get container status \"a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd\": rpc error: code = NotFound desc = could not find container \"a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd\": container with ID starting with a4107564f24550776609d7394904582aa4fc800b6a16c9584877960184fbf4fd not found: ID does not exist" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.284898 4718 scope.go:117] "RemoveContainer" containerID="014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.286622 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:36 crc kubenswrapper[4718]: E1210 15:02:36.287355 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e\": container with ID starting with 014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e not found: ID does not exist" containerID="014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.287402 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e"} err="failed to get container status \"014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e\": rpc error: code = NotFound desc = could not find container \"014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e\": container with ID starting with 014c8d70fbcd263fe41b9c290022d85ffc3ed9ddc05b8f404a5dff862dae217e not found: ID does not exist" Dec 10 
15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.360330 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4410ea-13ef-4e0e-bb41-7758005636f2-logs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.360792 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.360915 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-config-data\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.360951 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.360992 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.361028 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcs9c\" (UniqueName: 
\"kubernetes.io/projected/fd4410ea-13ef-4e0e-bb41-7758005636f2-kube-api-access-fcs9c\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.364542 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4410ea-13ef-4e0e-bb41-7758005636f2-logs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.371709 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.372579 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-config-data\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.386216 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.387316 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcs9c\" (UniqueName: \"kubernetes.io/projected/fd4410ea-13ef-4e0e-bb41-7758005636f2-kube-api-access-fcs9c\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.392065 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-public-tls-certs\") pod \"nova-api-0\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " pod="openstack/nova-api-0" Dec 10 15:02:36 crc kubenswrapper[4718]: I1210 15:02:36.565539 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.321738 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.432354 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.636685 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.668570 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8lwcn"] Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.670972 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.681249 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.681528 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.700617 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lwcn"] Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.735116 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-scripts\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.735298 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-config-data\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.735411 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.735503 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7d2\" (UniqueName: 
\"kubernetes.io/projected/3b4b4d05-de48-4103-9df2-bb976cd7f843-kube-api-access-rj7d2\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.838151 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj7d2\" (UniqueName: \"kubernetes.io/projected/3b4b4d05-de48-4103-9df2-bb976cd7f843-kube-api-access-rj7d2\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.838340 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-scripts\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.838480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-config-data\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.841713 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.845595 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-clkpk"] Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 
15:02:37.846107 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerName="dnsmasq-dns" containerID="cri-o://7102780e09e56dfb8f29e1b7a522c30470f322e32f0b15ccf6c1b46ae7177ac7" gracePeriod=10 Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.848612 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-scripts\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.850726 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.868663 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-config-data\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:37 crc kubenswrapper[4718]: I1210 15:02:37.870629 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj7d2\" (UniqueName: \"kubernetes.io/projected/3b4b4d05-de48-4103-9df2-bb976cd7f843-kube-api-access-rj7d2\") pod \"nova-cell1-cell-mapping-8lwcn\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.018966 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.067487 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0266bf72-60bf-4eb9-992a-79f1f5d35ced" path="/var/lib/kubelet/pods/0266bf72-60bf-4eb9-992a-79f1f5d35ced/volumes" Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.254314 4718 generic.go:334] "Generic (PLEG): container finished" podID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerID="7102780e09e56dfb8f29e1b7a522c30470f322e32f0b15ccf6c1b46ae7177ac7" exitCode=0 Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.254905 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" event={"ID":"d52f76ed-011b-46f0-b5a7-653e1c481595","Type":"ContainerDied","Data":"7102780e09e56dfb8f29e1b7a522c30470f322e32f0b15ccf6c1b46ae7177ac7"} Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.261664 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4410ea-13ef-4e0e-bb41-7758005636f2","Type":"ContainerStarted","Data":"766b8e1984e35c338caf2f40dd0bfc59a845d1bc97dc3bb494a5a98adea47f8c"} Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.261912 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4410ea-13ef-4e0e-bb41-7758005636f2","Type":"ContainerStarted","Data":"e2efd801c2baff1acb1dd141edd893a91c87f3c9bb3d6deddb3cf25b2264e482"} Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.639936 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.802663 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-nb\") pod \"d52f76ed-011b-46f0-b5a7-653e1c481595\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.802815 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-swift-storage-0\") pod \"d52f76ed-011b-46f0-b5a7-653e1c481595\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.802852 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-svc\") pod \"d52f76ed-011b-46f0-b5a7-653e1c481595\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.802966 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-config\") pod \"d52f76ed-011b-46f0-b5a7-653e1c481595\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.802997 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-sb\") pod \"d52f76ed-011b-46f0-b5a7-653e1c481595\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " Dec 10 15:02:38 crc kubenswrapper[4718]: I1210 15:02:38.803090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps7wz\" 
(UniqueName: \"kubernetes.io/projected/d52f76ed-011b-46f0-b5a7-653e1c481595-kube-api-access-ps7wz\") pod \"d52f76ed-011b-46f0-b5a7-653e1c481595\" (UID: \"d52f76ed-011b-46f0-b5a7-653e1c481595\") " Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.147706 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lwcn"] Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.152435 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52f76ed-011b-46f0-b5a7-653e1c481595-kube-api-access-ps7wz" (OuterVolumeSpecName: "kube-api-access-ps7wz") pod "d52f76ed-011b-46f0-b5a7-653e1c481595" (UID: "d52f76ed-011b-46f0-b5a7-653e1c481595"). InnerVolumeSpecName "kube-api-access-ps7wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.166014 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps7wz\" (UniqueName: \"kubernetes.io/projected/d52f76ed-011b-46f0-b5a7-653e1c481595-kube-api-access-ps7wz\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.229202 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d52f76ed-011b-46f0-b5a7-653e1c481595" (UID: "d52f76ed-011b-46f0-b5a7-653e1c481595"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.238835 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-config" (OuterVolumeSpecName: "config") pod "d52f76ed-011b-46f0-b5a7-653e1c481595" (UID: "d52f76ed-011b-46f0-b5a7-653e1c481595"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.240078 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d52f76ed-011b-46f0-b5a7-653e1c481595" (UID: "d52f76ed-011b-46f0-b5a7-653e1c481595"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.243060 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d52f76ed-011b-46f0-b5a7-653e1c481595" (UID: "d52f76ed-011b-46f0-b5a7-653e1c481595"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.268366 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d52f76ed-011b-46f0-b5a7-653e1c481595" (UID: "d52f76ed-011b-46f0-b5a7-653e1c481595"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.269132 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.269174 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.269186 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.269197 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.269207 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d52f76ed-011b-46f0-b5a7-653e1c481595-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.299432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4410ea-13ef-4e0e-bb41-7758005636f2","Type":"ContainerStarted","Data":"a25a0ea7fb35da8642cebe7c3487811936e4278388db598ea56b66d7a1dc240b"} Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.304903 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lwcn" event={"ID":"3b4b4d05-de48-4103-9df2-bb976cd7f843","Type":"ContainerStarted","Data":"d1038afba3a3055eb594f71ebaaa3cc2fe305553b89452c7af4d57e729b1f51e"} Dec 10 
15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.313527 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" event={"ID":"d52f76ed-011b-46f0-b5a7-653e1c481595","Type":"ContainerDied","Data":"b561495b4bd9042c18bd67116d44b9321065bbcb23079a1080716eaeafac2b9d"} Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.313599 4718 scope.go:117] "RemoveContainer" containerID="7102780e09e56dfb8f29e1b7a522c30470f322e32f0b15ccf6c1b46ae7177ac7" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.313622 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-clkpk" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.347034 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.347003334 podStartE2EDuration="3.347003334s" podCreationTimestamp="2025-12-10 15:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:39.328182581 +0000 UTC m=+1864.277406018" watchObservedRunningTime="2025-12-10 15:02:39.347003334 +0000 UTC m=+1864.296226751" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.476933 4718 scope.go:117] "RemoveContainer" containerID="68dfd1fce9ee9d2991347b6313e2473d30c5c28b5cf40ddbb0fe3721189c8e52" Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.531223 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-clkpk"] Dec 10 15:02:39 crc kubenswrapper[4718]: I1210 15:02:39.572086 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-clkpk"] Dec 10 15:02:40 crc kubenswrapper[4718]: I1210 15:02:40.045059 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" path="/var/lib/kubelet/pods/d52f76ed-011b-46f0-b5a7-653e1c481595/volumes" Dec 10 15:02:40 
crc kubenswrapper[4718]: I1210 15:02:40.336673 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lwcn" event={"ID":"3b4b4d05-de48-4103-9df2-bb976cd7f843","Type":"ContainerStarted","Data":"e3aa7ef178ffe4b6b1e323b58f203e4cb53e3de8063ba7747d78f20949204d59"} Dec 10 15:02:40 crc kubenswrapper[4718]: I1210 15:02:40.365848 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8lwcn" podStartSLOduration=3.365806365 podStartE2EDuration="3.365806365s" podCreationTimestamp="2025-12-10 15:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:40.358354348 +0000 UTC m=+1865.307577765" watchObservedRunningTime="2025-12-10 15:02:40.365806365 +0000 UTC m=+1865.315029812" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.932350 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.954220 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-sg-core-conf-yaml\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.954370 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-scripts\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.955760 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-log-httpd\") pod 
\"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.955993 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7h6v\" (UniqueName: \"kubernetes.io/projected/950afc4b-3374-46ee-9a3e-aea3ee2f6232-kube-api-access-x7h6v\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.956099 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-run-httpd\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.956155 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-config-data\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.956269 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-ceilometer-tls-certs\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.956567 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-combined-ca-bundle\") pod \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\" (UID: \"950afc4b-3374-46ee-9a3e-aea3ee2f6232\") " Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.956541 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.956879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.957842 4718 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.957875 4718 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/950afc4b-3374-46ee-9a3e-aea3ee2f6232-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.968742 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-scripts" (OuterVolumeSpecName: "scripts") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:41 crc kubenswrapper[4718]: I1210 15:02:41.979127 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950afc4b-3374-46ee-9a3e-aea3ee2f6232-kube-api-access-x7h6v" (OuterVolumeSpecName: "kube-api-access-x7h6v") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "kube-api-access-x7h6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.290463 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.290512 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7h6v\" (UniqueName: \"kubernetes.io/projected/950afc4b-3374-46ee-9a3e-aea3ee2f6232-kube-api-access-x7h6v\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.366770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.369897 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.395329 4718 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.395376 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.402410 4718 generic.go:334] "Generic (PLEG): container finished" podID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerID="5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca" exitCode=0 Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.402514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerDied","Data":"5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca"} Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.402561 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"950afc4b-3374-46ee-9a3e-aea3ee2f6232","Type":"ContainerDied","Data":"08649fcfbd482a3600aaf5340d9f2207914bf6e4f97f1b048c13df1dfba40245"} Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.402588 4718 scope.go:117] "RemoveContainer" containerID="5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.402846 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.426349 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.460599 4718 scope.go:117] "RemoveContainer" containerID="20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.471929 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-config-data" (OuterVolumeSpecName: "config-data") pod "950afc4b-3374-46ee-9a3e-aea3ee2f6232" (UID: "950afc4b-3374-46ee-9a3e-aea3ee2f6232"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.498755 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.498825 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/950afc4b-3374-46ee-9a3e-aea3ee2f6232-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.589065 4718 scope.go:117] "RemoveContainer" containerID="adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.621675 4718 scope.go:117] "RemoveContainer" containerID="5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.652706 4718 scope.go:117] "RemoveContainer" containerID="5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.653599 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7\": container with ID starting with 5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7 not found: ID does not exist" containerID="5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.653721 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7"} err="failed to get container status \"5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7\": rpc error: code = NotFound desc = could not find container 
\"5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7\": container with ID starting with 5728dc3eccfeac390a2aa759cd32fb0b005d126f4718457d6c99dcb64a13f2e7 not found: ID does not exist" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.653788 4718 scope.go:117] "RemoveContainer" containerID="20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.654330 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a\": container with ID starting with 20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a not found: ID does not exist" containerID="20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.654375 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a"} err="failed to get container status \"20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a\": rpc error: code = NotFound desc = could not find container \"20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a\": container with ID starting with 20dab44fbfea6e9f3e8d19b1ec1618aefcfe359c3cf41d11b54933f49884c31a not found: ID does not exist" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.654431 4718 scope.go:117] "RemoveContainer" containerID="adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.654975 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800\": container with ID starting with adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800 not found: ID does not exist" 
containerID="adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.655006 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800"} err="failed to get container status \"adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800\": rpc error: code = NotFound desc = could not find container \"adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800\": container with ID starting with adb5a54eca985ccdee086ffda1e2849f4f722c50eb15f50b7ac7de86b56df800 not found: ID does not exist" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.655022 4718 scope.go:117] "RemoveContainer" containerID="5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.655275 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca\": container with ID starting with 5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca not found: ID does not exist" containerID="5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.655300 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca"} err="failed to get container status \"5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca\": rpc error: code = NotFound desc = could not find container \"5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca\": container with ID starting with 5b3f3f47f18df10b2a82e44511c2a7f374437a154b3b1340b993b716bb59ccca not found: ID does not exist" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.762519 4718 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.776906 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.792536 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.793434 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerName="init" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793463 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerName="init" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.793506 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerName="dnsmasq-dns" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793517 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerName="dnsmasq-dns" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.793573 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-notification-agent" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793584 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-notification-agent" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.793619 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="sg-core" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793628 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="sg-core" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.793646 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="proxy-httpd" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793653 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="proxy-httpd" Dec 10 15:02:42 crc kubenswrapper[4718]: E1210 15:02:42.793663 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-central-agent" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793674 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-central-agent" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.793965 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-central-agent" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.794004 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="ceilometer-notification-agent" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.794026 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52f76ed-011b-46f0-b5a7-653e1c481595" containerName="dnsmasq-dns" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.794042 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="proxy-httpd" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.794061 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" containerName="sg-core" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.796682 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.800743 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.800810 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.801776 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.811645 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.811853 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9p9\" (UniqueName: \"kubernetes.io/projected/464e9486-56f7-4723-9cb6-6fe63cd86ae4-kube-api-access-bc9p9\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.811879 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-scripts\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.812111 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.812250 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464e9486-56f7-4723-9cb6-6fe63cd86ae4-run-httpd\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.812342 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-config-data\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.812449 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464e9486-56f7-4723-9cb6-6fe63cd86ae4-log-httpd\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.812620 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.823467 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.928946 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464e9486-56f7-4723-9cb6-6fe63cd86ae4-run-httpd\") pod \"ceilometer-0\" (UID: 
\"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929036 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-config-data\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464e9486-56f7-4723-9cb6-6fe63cd86ae4-log-httpd\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929104 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929173 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929646 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464e9486-56f7-4723-9cb6-6fe63cd86ae4-run-httpd\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929728 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bc9p9\" (UniqueName: \"kubernetes.io/projected/464e9486-56f7-4723-9cb6-6fe63cd86ae4-kube-api-access-bc9p9\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.929941 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/464e9486-56f7-4723-9cb6-6fe63cd86ae4-log-httpd\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.930073 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-scripts\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.930480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.935948 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.955532 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-config-data\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc 
kubenswrapper[4718]: I1210 15:02:42.956314 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.958503 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.973459 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/464e9486-56f7-4723-9cb6-6fe63cd86ae4-scripts\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:42 crc kubenswrapper[4718]: I1210 15:02:42.984491 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9p9\" (UniqueName: \"kubernetes.io/projected/464e9486-56f7-4723-9cb6-6fe63cd86ae4-kube-api-access-bc9p9\") pod \"ceilometer-0\" (UID: \"464e9486-56f7-4723-9cb6-6fe63cd86ae4\") " pod="openstack/ceilometer-0" Dec 10 15:02:43 crc kubenswrapper[4718]: I1210 15:02:43.129581 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 10 15:02:43 crc kubenswrapper[4718]: I1210 15:02:43.795141 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 10 15:02:44 crc kubenswrapper[4718]: I1210 15:02:44.033907 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950afc4b-3374-46ee-9a3e-aea3ee2f6232" path="/var/lib/kubelet/pods/950afc4b-3374-46ee-9a3e-aea3ee2f6232/volumes" Dec 10 15:02:44 crc kubenswrapper[4718]: I1210 15:02:44.440094 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464e9486-56f7-4723-9cb6-6fe63cd86ae4","Type":"ContainerStarted","Data":"480fca020e3557a9e53274703bdd73ace37cd0e5b56d964d201823b082b57465"} Dec 10 15:02:44 crc kubenswrapper[4718]: I1210 15:02:44.440872 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464e9486-56f7-4723-9cb6-6fe63cd86ae4","Type":"ContainerStarted","Data":"86e622f5e43b1240bca05caa97ecaf1d5c2d4694284348e11d9ae8b9de277733"} Dec 10 15:02:45 crc kubenswrapper[4718]: I1210 15:02:45.457447 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464e9486-56f7-4723-9cb6-6fe63cd86ae4","Type":"ContainerStarted","Data":"6a6cfe18e041d53753d6659b26b15b9c2dc7bf6712cb6a702806e8db35e69a91"} Dec 10 15:02:46 crc kubenswrapper[4718]: I1210 15:02:46.472336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464e9486-56f7-4723-9cb6-6fe63cd86ae4","Type":"ContainerStarted","Data":"2d18e0a16343ff68fd37b5a1714cd61937118fbf81cef9f3addabd97e09e676a"} Dec 10 15:02:46 crc kubenswrapper[4718]: I1210 15:02:46.566848 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:02:46 crc kubenswrapper[4718]: I1210 15:02:46.566936 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:02:47 crc 
kubenswrapper[4718]: I1210 15:02:47.487490 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"464e9486-56f7-4723-9cb6-6fe63cd86ae4","Type":"ContainerStarted","Data":"111c8e04d8ed8a4258edfc31ae938aec9a67723692cbe1aa7904692fa6f3e0cf"} Dec 10 15:02:47 crc kubenswrapper[4718]: I1210 15:02:47.488370 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 10 15:02:47 crc kubenswrapper[4718]: I1210 15:02:47.517291 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.269233939 podStartE2EDuration="5.517245157s" podCreationTimestamp="2025-12-10 15:02:42 +0000 UTC" firstStartedPulling="2025-12-10 15:02:43.789544715 +0000 UTC m=+1868.738768132" lastFinishedPulling="2025-12-10 15:02:47.037555943 +0000 UTC m=+1871.986779350" observedRunningTime="2025-12-10 15:02:47.509607545 +0000 UTC m=+1872.458830972" watchObservedRunningTime="2025-12-10 15:02:47.517245157 +0000 UTC m=+1872.466468574" Dec 10 15:02:47 crc kubenswrapper[4718]: I1210 15:02:47.579601 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:02:47 crc kubenswrapper[4718]: I1210 15:02:47.579957 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:02:48 crc kubenswrapper[4718]: I1210 15:02:48.507024 4718 generic.go:334] "Generic (PLEG): container finished" podID="3b4b4d05-de48-4103-9df2-bb976cd7f843" 
containerID="e3aa7ef178ffe4b6b1e323b58f203e4cb53e3de8063ba7747d78f20949204d59" exitCode=0 Dec 10 15:02:48 crc kubenswrapper[4718]: I1210 15:02:48.507121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lwcn" event={"ID":"3b4b4d05-de48-4103-9df2-bb976cd7f843","Type":"ContainerDied","Data":"e3aa7ef178ffe4b6b1e323b58f203e4cb53e3de8063ba7747d78f20949204d59"} Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.033829 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.100169 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj7d2\" (UniqueName: \"kubernetes.io/projected/3b4b4d05-de48-4103-9df2-bb976cd7f843-kube-api-access-rj7d2\") pod \"3b4b4d05-de48-4103-9df2-bb976cd7f843\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.101019 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-config-data\") pod \"3b4b4d05-de48-4103-9df2-bb976cd7f843\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.101445 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-scripts\") pod \"3b4b4d05-de48-4103-9df2-bb976cd7f843\" (UID: \"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.102106 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-combined-ca-bundle\") pod \"3b4b4d05-de48-4103-9df2-bb976cd7f843\" (UID: 
\"3b4b4d05-de48-4103-9df2-bb976cd7f843\") " Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.111821 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4b4d05-de48-4103-9df2-bb976cd7f843-kube-api-access-rj7d2" (OuterVolumeSpecName: "kube-api-access-rj7d2") pod "3b4b4d05-de48-4103-9df2-bb976cd7f843" (UID: "3b4b4d05-de48-4103-9df2-bb976cd7f843"). InnerVolumeSpecName "kube-api-access-rj7d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.121705 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-scripts" (OuterVolumeSpecName: "scripts") pod "3b4b4d05-de48-4103-9df2-bb976cd7f843" (UID: "3b4b4d05-de48-4103-9df2-bb976cd7f843"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.144455 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b4b4d05-de48-4103-9df2-bb976cd7f843" (UID: "3b4b4d05-de48-4103-9df2-bb976cd7f843"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.146985 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-config-data" (OuterVolumeSpecName: "config-data") pod "3b4b4d05-de48-4103-9df2-bb976cd7f843" (UID: "3b4b4d05-de48-4103-9df2-bb976cd7f843"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.212868 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj7d2\" (UniqueName: \"kubernetes.io/projected/3b4b4d05-de48-4103-9df2-bb976cd7f843-kube-api-access-rj7d2\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.212934 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.212955 4718 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-scripts\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.212965 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b4b4d05-de48-4103-9df2-bb976cd7f843-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.535884 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8lwcn" event={"ID":"3b4b4d05-de48-4103-9df2-bb976cd7f843","Type":"ContainerDied","Data":"d1038afba3a3055eb594f71ebaaa3cc2fe305553b89452c7af4d57e729b1f51e"} Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.535958 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1038afba3a3055eb594f71ebaaa3cc2fe305553b89452c7af4d57e729b1f51e" Dec 10 15:02:50 crc kubenswrapper[4718]: I1210 15:02:50.536086 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8lwcn" Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.066210 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.066637 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-log" containerID="cri-o://766b8e1984e35c338caf2f40dd0bfc59a845d1bc97dc3bb494a5a98adea47f8c" gracePeriod=30 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.067451 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-api" containerID="cri-o://a25a0ea7fb35da8642cebe7c3487811936e4278388db598ea56b66d7a1dc240b" gracePeriod=30 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.209296 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.209732 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="09e476e7-3088-49bf-8e65-94212290e5e8" containerName="nova-scheduler-scheduler" containerID="cri-o://96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096" gracePeriod=30 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.230712 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.231140 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-log" containerID="cri-o://557591a1a71180960c70024cdaabc7123b9d03913c6069cf5a5e26eb6ce3fbc5" gracePeriod=30 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.231428 4718 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-metadata" containerID="cri-o://23d4dcb35f647dcee7ccd4837f689e76a074605956223bd46cff165951a06847" gracePeriod=30 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.457666 4718 scope.go:117] "RemoveContainer" containerID="0ce446244bfe9eab1d65bf0e9a1639aa43526a4913e613958f2dd7091d1b01d5" Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.596309 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerID="766b8e1984e35c338caf2f40dd0bfc59a845d1bc97dc3bb494a5a98adea47f8c" exitCode=143 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.596454 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4410ea-13ef-4e0e-bb41-7758005636f2","Type":"ContainerDied","Data":"766b8e1984e35c338caf2f40dd0bfc59a845d1bc97dc3bb494a5a98adea47f8c"} Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.613114 4718 generic.go:334] "Generic (PLEG): container finished" podID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerID="557591a1a71180960c70024cdaabc7123b9d03913c6069cf5a5e26eb6ce3fbc5" exitCode=143 Dec 10 15:02:51 crc kubenswrapper[4718]: I1210 15:02:51.613171 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4aed149-3d16-484b-bef2-7828f1ffda8e","Type":"ContainerDied","Data":"557591a1a71180960c70024cdaabc7123b9d03913c6069cf5a5e26eb6ce3fbc5"} Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.838031 4718 generic.go:334] "Generic (PLEG): container finished" podID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerID="23d4dcb35f647dcee7ccd4837f689e76a074605956223bd46cff165951a06847" exitCode=0 Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.838979 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c4aed149-3d16-484b-bef2-7828f1ffda8e","Type":"ContainerDied","Data":"23d4dcb35f647dcee7ccd4837f689e76a074605956223bd46cff165951a06847"} Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.839073 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4aed149-3d16-484b-bef2-7828f1ffda8e","Type":"ContainerDied","Data":"c84f7331c71059452e4c28d59fcb1a8ef176e8bb8aa7e797cfc20c661dc36b71"} Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.839111 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c84f7331c71059452e4c28d59fcb1a8ef176e8bb8aa7e797cfc20c661dc36b71" Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.841120 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.995212 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-nova-metadata-tls-certs\") pod \"c4aed149-3d16-484b-bef2-7828f1ffda8e\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.995667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4aed149-3d16-484b-bef2-7828f1ffda8e-logs\") pod \"c4aed149-3d16-484b-bef2-7828f1ffda8e\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.995826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzn2v\" (UniqueName: \"kubernetes.io/projected/c4aed149-3d16-484b-bef2-7828f1ffda8e-kube-api-access-jzn2v\") pod \"c4aed149-3d16-484b-bef2-7828f1ffda8e\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.995905 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-config-data\") pod \"c4aed149-3d16-484b-bef2-7828f1ffda8e\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " Dec 10 15:02:53 crc kubenswrapper[4718]: I1210 15:02:53.996104 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-combined-ca-bundle\") pod \"c4aed149-3d16-484b-bef2-7828f1ffda8e\" (UID: \"c4aed149-3d16-484b-bef2-7828f1ffda8e\") " Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.006096 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4aed149-3d16-484b-bef2-7828f1ffda8e-logs" (OuterVolumeSpecName: "logs") pod "c4aed149-3d16-484b-bef2-7828f1ffda8e" (UID: "c4aed149-3d16-484b-bef2-7828f1ffda8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.068116 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-config-data" (OuterVolumeSpecName: "config-data") pod "c4aed149-3d16-484b-bef2-7828f1ffda8e" (UID: "c4aed149-3d16-484b-bef2-7828f1ffda8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.074168 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4aed149-3d16-484b-bef2-7828f1ffda8e-kube-api-access-jzn2v" (OuterVolumeSpecName: "kube-api-access-jzn2v") pod "c4aed149-3d16-484b-bef2-7828f1ffda8e" (UID: "c4aed149-3d16-484b-bef2-7828f1ffda8e"). InnerVolumeSpecName "kube-api-access-jzn2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.080857 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4aed149-3d16-484b-bef2-7828f1ffda8e" (UID: "c4aed149-3d16-484b-bef2-7828f1ffda8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.099747 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4aed149-3d16-484b-bef2-7828f1ffda8e-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.099805 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzn2v\" (UniqueName: \"kubernetes.io/projected/c4aed149-3d16-484b-bef2-7828f1ffda8e-kube-api-access-jzn2v\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.099818 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.099832 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.186717 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c4aed149-3d16-484b-bef2-7828f1ffda8e" (UID: "c4aed149-3d16-484b-bef2-7828f1ffda8e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.203195 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4aed149-3d16-484b-bef2-7828f1ffda8e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.535638 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.542696 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.546584 4718 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.546775 4718 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="09e476e7-3088-49bf-8e65-94212290e5e8" containerName="nova-scheduler-scheduler" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.855119 4718 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.927681 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.949811 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.962464 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.963161 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-metadata" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.963189 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-metadata" Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.963237 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-log" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.963245 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-log" Dec 10 15:02:54 crc kubenswrapper[4718]: E1210 15:02:54.963256 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4b4d05-de48-4103-9df2-bb976cd7f843" containerName="nova-manage" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.963262 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4b4d05-de48-4103-9df2-bb976cd7f843" containerName="nova-manage" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.963523 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4b4d05-de48-4103-9df2-bb976cd7f843" containerName="nova-manage" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.963540 4718 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-metadata" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.963563 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" containerName="nova-metadata-log" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.964910 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.969077 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.971468 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 10 15:02:54 crc kubenswrapper[4718]: I1210 15:02:54.973639 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.042231 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.042337 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-config-data\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.042455 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769qw\" (UniqueName: 
\"kubernetes.io/projected/badca5dd-ef88-4de8-a596-9cb2adc01193-kube-api-access-769qw\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.042707 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.042969 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/badca5dd-ef88-4de8-a596-9cb2adc01193-logs\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.145295 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.145410 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/badca5dd-ef88-4de8-a596-9cb2adc01193-logs\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.145483 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " 
pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.145523 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-config-data\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.145599 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-769qw\" (UniqueName: \"kubernetes.io/projected/badca5dd-ef88-4de8-a596-9cb2adc01193-kube-api-access-769qw\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.146910 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/badca5dd-ef88-4de8-a596-9cb2adc01193-logs\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.157959 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-config-data\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.160256 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.168064 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-769qw\" (UniqueName: 
\"kubernetes.io/projected/badca5dd-ef88-4de8-a596-9cb2adc01193-kube-api-access-769qw\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.172341 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/badca5dd-ef88-4de8-a596-9cb2adc01193-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"badca5dd-ef88-4de8-a596-9cb2adc01193\") " pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.291351 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.832444 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.872882 4718 generic.go:334] "Generic (PLEG): container finished" podID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerID="a25a0ea7fb35da8642cebe7c3487811936e4278388db598ea56b66d7a1dc240b" exitCode=0 Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.872925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4410ea-13ef-4e0e-bb41-7758005636f2","Type":"ContainerDied","Data":"a25a0ea7fb35da8642cebe7c3487811936e4278388db598ea56b66d7a1dc240b"} Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.875433 4718 generic.go:334] "Generic (PLEG): container finished" podID="09e476e7-3088-49bf-8e65-94212290e5e8" containerID="96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096" exitCode=0 Dec 10 15:02:55 crc kubenswrapper[4718]: I1210 15:02:55.875502 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09e476e7-3088-49bf-8e65-94212290e5e8","Type":"ContainerDied","Data":"96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096"} Dec 10 15:02:56 
crc kubenswrapper[4718]: I1210 15:02:56.040716 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4aed149-3d16-484b-bef2-7828f1ffda8e" path="/var/lib/kubelet/pods/c4aed149-3d16-484b-bef2-7828f1ffda8e/volumes" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.676689 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.683510 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.790055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4410ea-13ef-4e0e-bb41-7758005636f2-logs\") pod \"fd4410ea-13ef-4e0e-bb41-7758005636f2\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.790637 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcs9c\" (UniqueName: \"kubernetes.io/projected/fd4410ea-13ef-4e0e-bb41-7758005636f2-kube-api-access-fcs9c\") pod \"fd4410ea-13ef-4e0e-bb41-7758005636f2\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.790826 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-public-tls-certs\") pod \"fd4410ea-13ef-4e0e-bb41-7758005636f2\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.790863 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66z7j\" (UniqueName: \"kubernetes.io/projected/09e476e7-3088-49bf-8e65-94212290e5e8-kube-api-access-66z7j\") pod \"09e476e7-3088-49bf-8e65-94212290e5e8\" (UID: 
\"09e476e7-3088-49bf-8e65-94212290e5e8\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.790987 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-internal-tls-certs\") pod \"fd4410ea-13ef-4e0e-bb41-7758005636f2\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.791055 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-config-data\") pod \"09e476e7-3088-49bf-8e65-94212290e5e8\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.791102 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-combined-ca-bundle\") pod \"fd4410ea-13ef-4e0e-bb41-7758005636f2\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.791126 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4410ea-13ef-4e0e-bb41-7758005636f2-logs" (OuterVolumeSpecName: "logs") pod "fd4410ea-13ef-4e0e-bb41-7758005636f2" (UID: "fd4410ea-13ef-4e0e-bb41-7758005636f2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.791347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-config-data\") pod \"fd4410ea-13ef-4e0e-bb41-7758005636f2\" (UID: \"fd4410ea-13ef-4e0e-bb41-7758005636f2\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.791402 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-combined-ca-bundle\") pod \"09e476e7-3088-49bf-8e65-94212290e5e8\" (UID: \"09e476e7-3088-49bf-8e65-94212290e5e8\") " Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.792057 4718 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4410ea-13ef-4e0e-bb41-7758005636f2-logs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.804863 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4410ea-13ef-4e0e-bb41-7758005636f2-kube-api-access-fcs9c" (OuterVolumeSpecName: "kube-api-access-fcs9c") pod "fd4410ea-13ef-4e0e-bb41-7758005636f2" (UID: "fd4410ea-13ef-4e0e-bb41-7758005636f2"). InnerVolumeSpecName "kube-api-access-fcs9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.808659 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e476e7-3088-49bf-8e65-94212290e5e8-kube-api-access-66z7j" (OuterVolumeSpecName: "kube-api-access-66z7j") pod "09e476e7-3088-49bf-8e65-94212290e5e8" (UID: "09e476e7-3088-49bf-8e65-94212290e5e8"). InnerVolumeSpecName "kube-api-access-66z7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.835535 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd4410ea-13ef-4e0e-bb41-7758005636f2" (UID: "fd4410ea-13ef-4e0e-bb41-7758005636f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.846948 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-config-data" (OuterVolumeSpecName: "config-data") pod "fd4410ea-13ef-4e0e-bb41-7758005636f2" (UID: "fd4410ea-13ef-4e0e-bb41-7758005636f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.872787 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09e476e7-3088-49bf-8e65-94212290e5e8" (UID: "09e476e7-3088-49bf-8e65-94212290e5e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.873451 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-config-data" (OuterVolumeSpecName: "config-data") pod "09e476e7-3088-49bf-8e65-94212290e5e8" (UID: "09e476e7-3088-49bf-8e65-94212290e5e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.885257 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fd4410ea-13ef-4e0e-bb41-7758005636f2" (UID: "fd4410ea-13ef-4e0e-bb41-7758005636f2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.889359 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd4410ea-13ef-4e0e-bb41-7758005636f2" (UID: "fd4410ea-13ef-4e0e-bb41-7758005636f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.895574 4718 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.895675 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66z7j\" (UniqueName: \"kubernetes.io/projected/09e476e7-3088-49bf-8e65-94212290e5e8-kube-api-access-66z7j\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.896129 4718 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.896145 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-config-data\") on 
node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.896222 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.896239 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4410ea-13ef-4e0e-bb41-7758005636f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.896283 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09e476e7-3088-49bf-8e65-94212290e5e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.896300 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcs9c\" (UniqueName: \"kubernetes.io/projected/fd4410ea-13ef-4e0e-bb41-7758005636f2-kube-api-access-fcs9c\") on node \"crc\" DevicePath \"\"" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.900214 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09e476e7-3088-49bf-8e65-94212290e5e8","Type":"ContainerDied","Data":"c24bca1ed04358f77735fbddc266861382fa036041c72503ba795915d45395bc"} Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.900280 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.900304 4718 scope.go:117] "RemoveContainer" containerID="96e5f13f37b794cf6aebbe98bd418c61caa91fc2023329cd42cead2df6c0b096" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.907364 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"badca5dd-ef88-4de8-a596-9cb2adc01193","Type":"ContainerStarted","Data":"1e9e3d0b4d56a79dadebdb9050f865d658c3b6b9e267d1591e1e3ff17cf912ab"} Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.907490 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"badca5dd-ef88-4de8-a596-9cb2adc01193","Type":"ContainerStarted","Data":"8e5ae1a29b70461df613337684fcee30c2b5371f71e2af885f8c0ebd08a17181"} Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.907505 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"badca5dd-ef88-4de8-a596-9cb2adc01193","Type":"ContainerStarted","Data":"78701de220958f8074ff8dc2169e1953dc2fb6a163e6e727fd2142d4af66e1f4"} Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.915481 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd4410ea-13ef-4e0e-bb41-7758005636f2","Type":"ContainerDied","Data":"e2efd801c2baff1acb1dd141edd893a91c87f3c9bb3d6deddb3cf25b2264e482"} Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.915635 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.975338 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.975268808 podStartE2EDuration="2.975268808s" podCreationTimestamp="2025-12-10 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:56.93232593 +0000 UTC m=+1881.881549357" watchObservedRunningTime="2025-12-10 15:02:56.975268808 +0000 UTC m=+1881.924492225" Dec 10 15:02:56 crc kubenswrapper[4718]: I1210 15:02:56.986622 4718 scope.go:117] "RemoveContainer" containerID="a25a0ea7fb35da8642cebe7c3487811936e4278388db598ea56b66d7a1dc240b" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.027006 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.049674 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.056694 4718 scope.go:117] "RemoveContainer" containerID="766b8e1984e35c338caf2f40dd0bfc59a845d1bc97dc3bb494a5a98adea47f8c" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.142889 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.158991 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.172018 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: E1210 15:02:57.172810 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-api" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.172840 4718 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-api" Dec 10 15:02:57 crc kubenswrapper[4718]: E1210 15:02:57.172861 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e476e7-3088-49bf-8e65-94212290e5e8" containerName="nova-scheduler-scheduler" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.172868 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e476e7-3088-49bf-8e65-94212290e5e8" containerName="nova-scheduler-scheduler" Dec 10 15:02:57 crc kubenswrapper[4718]: E1210 15:02:57.172910 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-log" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.172919 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-log" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.173164 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-log" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.173189 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e476e7-3088-49bf-8e65-94212290e5e8" containerName="nova-scheduler-scheduler" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.173199 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" containerName="nova-api-api" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.174258 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.177081 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.193723 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.196742 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.200106 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.200226 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.200612 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.209551 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.222585 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.325594 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4b750b-8599-4d08-9b09-d2d75f035dc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.325671 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-config-data\") pod 
\"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.325800 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.325873 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-public-tls-certs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.326061 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12aef49c-9e40-4cc4-a280-103e9c6180de-logs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.326148 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4s5\" (UniqueName: \"kubernetes.io/projected/cc4b750b-8599-4d08-9b09-d2d75f035dc4-kube-api-access-7z4s5\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.326238 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc 
kubenswrapper[4718]: I1210 15:02:57.326432 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4b750b-8599-4d08-9b09-d2d75f035dc4-config-data\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.326469 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zps\" (UniqueName: \"kubernetes.io/projected/12aef49c-9e40-4cc4-a280-103e9c6180de-kube-api-access-d2zps\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429262 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4b750b-8599-4d08-9b09-d2d75f035dc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429326 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-config-data\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429366 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429435 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-public-tls-certs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429504 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12aef49c-9e40-4cc4-a280-103e9c6180de-logs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429527 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4s5\" (UniqueName: \"kubernetes.io/projected/cc4b750b-8599-4d08-9b09-d2d75f035dc4-kube-api-access-7z4s5\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429556 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4b750b-8599-4d08-9b09-d2d75f035dc4-config-data\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.429640 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zps\" (UniqueName: \"kubernetes.io/projected/12aef49c-9e40-4cc4-a280-103e9c6180de-kube-api-access-d2zps\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc 
kubenswrapper[4718]: I1210 15:02:57.431039 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12aef49c-9e40-4cc4-a280-103e9c6180de-logs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.435006 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc4b750b-8599-4d08-9b09-d2d75f035dc4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.436141 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.437241 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc4b750b-8599-4d08-9b09-d2d75f035dc4-config-data\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.440612 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.446820 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-config-data\") pod \"nova-api-0\" (UID: 
\"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.450238 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12aef49c-9e40-4cc4-a280-103e9c6180de-public-tls-certs\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.455282 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zps\" (UniqueName: \"kubernetes.io/projected/12aef49c-9e40-4cc4-a280-103e9c6180de-kube-api-access-d2zps\") pod \"nova-api-0\" (UID: \"12aef49c-9e40-4cc4-a280-103e9c6180de\") " pod="openstack/nova-api-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.459362 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4s5\" (UniqueName: \"kubernetes.io/projected/cc4b750b-8599-4d08-9b09-d2d75f035dc4-kube-api-access-7z4s5\") pod \"nova-scheduler-0\" (UID: \"cc4b750b-8599-4d08-9b09-d2d75f035dc4\") " pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.594935 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 10 15:02:57 crc kubenswrapper[4718]: I1210 15:02:57.606947 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.106663 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e476e7-3088-49bf-8e65-94212290e5e8" path="/var/lib/kubelet/pods/09e476e7-3088-49bf-8e65-94212290e5e8/volumes" Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.108602 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4410ea-13ef-4e0e-bb41-7758005636f2" path="/var/lib/kubelet/pods/fd4410ea-13ef-4e0e-bb41-7758005636f2/volumes" Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.213142 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.225577 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.950133 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12aef49c-9e40-4cc4-a280-103e9c6180de","Type":"ContainerStarted","Data":"b9ade3f00a38c8e43107b87854ff490fbad569633e9cbb2cc7f81158c2b6c10f"} Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.950836 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12aef49c-9e40-4cc4-a280-103e9c6180de","Type":"ContainerStarted","Data":"acb987d3ddef332342daa61ace6ec4a7f2e727763b31ac3d46e006fe8c7b812a"} Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.953097 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cc4b750b-8599-4d08-9b09-d2d75f035dc4","Type":"ContainerStarted","Data":"77773d01163167799b9ee68dd53ee0cde58820900294f1efd2b48e6e9d96e5ab"} Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.953178 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"cc4b750b-8599-4d08-9b09-d2d75f035dc4","Type":"ContainerStarted","Data":"783292e9715c6343663d45cb915c03964a8d65da9afefd02d07e6231a05b6d6b"} Dec 10 15:02:58 crc kubenswrapper[4718]: I1210 15:02:58.977734 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.977679578 podStartE2EDuration="2.977679578s" podCreationTimestamp="2025-12-10 15:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:02:58.974112868 +0000 UTC m=+1883.923336285" watchObservedRunningTime="2025-12-10 15:02:58.977679578 +0000 UTC m=+1883.926902995" Dec 10 15:02:59 crc kubenswrapper[4718]: I1210 15:02:59.977971 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12aef49c-9e40-4cc4-a280-103e9c6180de","Type":"ContainerStarted","Data":"ab86b9673e36dc23436d843589f9074b82f0eed6af36d24cde5ff8e87222ee76"} Dec 10 15:03:00 crc kubenswrapper[4718]: I1210 15:03:00.013647 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.01361837 podStartE2EDuration="3.01361837s" podCreationTimestamp="2025-12-10 15:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:03:00.0052259 +0000 UTC m=+1884.954449317" watchObservedRunningTime="2025-12-10 15:03:00.01361837 +0000 UTC m=+1884.962841787" Dec 10 15:03:00 crc kubenswrapper[4718]: I1210 15:03:00.293184 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:03:00 crc kubenswrapper[4718]: I1210 15:03:00.293268 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 10 15:03:02 crc kubenswrapper[4718]: I1210 15:03:02.596054 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 10 15:03:05 crc kubenswrapper[4718]: I1210 15:03:05.292498 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:03:05 crc kubenswrapper[4718]: I1210 15:03:05.292865 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 10 15:03:06 crc kubenswrapper[4718]: I1210 15:03:06.301959 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="badca5dd-ef88-4de8-a596-9cb2adc01193" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:03:06 crc kubenswrapper[4718]: I1210 15:03:06.309776 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="badca5dd-ef88-4de8-a596-9cb2adc01193" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:03:07 crc kubenswrapper[4718]: I1210 15:03:07.595643 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 10 15:03:07 crc kubenswrapper[4718]: I1210 15:03:07.607915 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:03:07 crc kubenswrapper[4718]: I1210 15:03:07.607968 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 10 15:03:07 crc kubenswrapper[4718]: I1210 15:03:07.635502 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 10 15:03:08 crc kubenswrapper[4718]: I1210 15:03:08.148365 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" 
Dec 10 15:03:08 crc kubenswrapper[4718]: I1210 15:03:08.625577 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12aef49c-9e40-4cc4-a280-103e9c6180de" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 10 15:03:08 crc kubenswrapper[4718]: I1210 15:03:08.625610 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12aef49c-9e40-4cc4-a280-103e9c6180de" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:03:13 crc kubenswrapper[4718]: I1210 15:03:13.207168 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 10 15:03:15 crc kubenswrapper[4718]: I1210 15:03:15.300293 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:03:15 crc kubenswrapper[4718]: I1210 15:03:15.307784 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 10 15:03:15 crc kubenswrapper[4718]: I1210 15:03:15.307974 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:03:15 crc kubenswrapper[4718]: I1210 15:03:15.425382 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 10 15:03:17 crc kubenswrapper[4718]: I1210 15:03:17.620895 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:03:17 crc kubenswrapper[4718]: I1210 15:03:17.621658 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:03:17 crc kubenswrapper[4718]: I1210 15:03:17.629288 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Dec 10 15:03:17 crc kubenswrapper[4718]: I1210 15:03:17.637583 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:03:18 crc kubenswrapper[4718]: I1210 15:03:18.453230 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 10 15:03:18 crc kubenswrapper[4718]: I1210 15:03:18.464626 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 10 15:03:27 crc kubenswrapper[4718]: I1210 15:03:27.902766 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:03:30 crc kubenswrapper[4718]: I1210 15:03:30.514827 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:03:32 crc kubenswrapper[4718]: I1210 15:03:32.706658 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="rabbitmq" containerID="cri-o://39f018b3b86bf926a366646740cd5924108999d0e42616a6428f138277fcd0df" gracePeriod=604796 Dec 10 15:03:33 crc kubenswrapper[4718]: I1210 15:03:33.743501 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.366290 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="rabbitmq" containerID="cri-o://59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc" gracePeriod=604797 Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.660421 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerID="39f018b3b86bf926a366646740cd5924108999d0e42616a6428f138277fcd0df" exitCode=0 Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.660465 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0fcf07d8-859b-4547-8a32-824f40da6a93","Type":"ContainerDied","Data":"39f018b3b86bf926a366646740cd5924108999d0e42616a6428f138277fcd0df"} Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.885505 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.945995 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946114 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79ddq\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-kube-api-access-79ddq\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946185 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fcf07d8-859b-4547-8a32-824f40da6a93-pod-info\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946275 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fcf07d8-859b-4547-8a32-824f40da6a93-erlang-cookie-secret\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: 
\"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946607 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-server-conf\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946806 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-config-data\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-confd\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.946929 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-tls\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.947000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-plugins\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.947078 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-erlang-cookie\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.947134 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-plugins-conf\") pod \"0fcf07d8-859b-4547-8a32-824f40da6a93\" (UID: \"0fcf07d8-859b-4547-8a32-824f40da6a93\") " Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.950324 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.951927 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.954444 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.961838 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fcf07d8-859b-4547-8a32-824f40da6a93-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.961898 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.962036 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:03:34 crc kubenswrapper[4718]: I1210 15:03:34.972439 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0fcf07d8-859b-4547-8a32-824f40da6a93-pod-info" (OuterVolumeSpecName: "pod-info") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.028212 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-kube-api-access-79ddq" (OuterVolumeSpecName: "kube-api-access-79ddq") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "kube-api-access-79ddq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051669 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fcf07d8-859b-4547-8a32-824f40da6a93-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051722 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051731 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051742 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051755 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051816 4718 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051835 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79ddq\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-kube-api-access-79ddq\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.051847 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fcf07d8-859b-4547-8a32-824f40da6a93-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.095177 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.116996 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-server-conf" (OuterVolumeSpecName: "server-conf") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.126675 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-config-data" (OuterVolumeSpecName: "config-data") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.155105 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.156461 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.156484 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fcf07d8-859b-4547-8a32-824f40da6a93-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.233836 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0fcf07d8-859b-4547-8a32-824f40da6a93" (UID: "0fcf07d8-859b-4547-8a32-824f40da6a93"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.259556 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fcf07d8-859b-4547-8a32-824f40da6a93-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.677868 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0fcf07d8-859b-4547-8a32-824f40da6a93","Type":"ContainerDied","Data":"bd733297e1f2aec880af84c4c0846c3d26f69d8fea31f5413b6b94099bc9fe29"} Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.677983 4718 scope.go:117] "RemoveContainer" containerID="39f018b3b86bf926a366646740cd5924108999d0e42616a6428f138277fcd0df" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.677996 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.745443 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.760077 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.785087 4718 scope.go:117] "RemoveContainer" containerID="c8b2fcef7b1e85bda85dbf340e17ddc30ec7066a78b9e32b881e0b894f46560d" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.802917 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:03:35 crc kubenswrapper[4718]: E1210 15:03:35.808867 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="rabbitmq" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.808941 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" 
containerName="rabbitmq" Dec 10 15:03:35 crc kubenswrapper[4718]: E1210 15:03:35.809018 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="setup-container" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.809029 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="setup-container" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.809565 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" containerName="rabbitmq" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.811461 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.820437 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.820640 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-dl8ph" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.820931 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.821729 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.827070 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.827468 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 10 15:03:35 crc kubenswrapper[4718]: I1210 15:03:35.827751 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 10 15:03:35 crc 
kubenswrapper[4718]: I1210 15:03:35.846921 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.164962 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5nj\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-kube-api-access-gf5nj\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.165839 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.166217 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.166448 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.166593 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: 
\"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.166699 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.166829 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.166938 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.167071 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e530819-d029-4526-aed9-2cd33568dbcb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.167213 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e530819-d029-4526-aed9-2cd33568dbcb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " 
pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.167446 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.188894 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fcf07d8-859b-4547-8a32-824f40da6a93" path="/var/lib/kubelet/pods/0fcf07d8-859b-4547-8a32-824f40da6a93/volumes" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270255 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270335 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5nj\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-kube-api-access-gf5nj\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270356 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270450 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270480 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270515 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270537 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270570 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270591 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" 
Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270623 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e530819-d029-4526-aed9-2cd33568dbcb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.270668 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e530819-d029-4526-aed9-2cd33568dbcb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.271827 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.272018 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.272810 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.273056 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.274378 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.276194 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4e530819-d029-4526-aed9-2cd33568dbcb-config-data\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.281539 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.285158 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4e530819-d029-4526-aed9-2cd33568dbcb-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.285700 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4e530819-d029-4526-aed9-2cd33568dbcb-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " 
pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.301033 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5nj\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-kube-api-access-gf5nj\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.302175 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4e530819-d029-4526-aed9-2cd33568dbcb-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.348444 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"4e530819-d029-4526-aed9-2cd33568dbcb\") " pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.449550 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.615187 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.683700 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-confd\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684121 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/282d32e9-d539-4bac-9fd1-a8735e8d92e1-erlang-cookie-secret\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684165 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-server-conf\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684279 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-erlang-cookie\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684308 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684426 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/282d32e9-d539-4bac-9fd1-a8735e8d92e1-pod-info\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684500 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55kgx\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-kube-api-access-55kgx\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684557 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-tls\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684628 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-plugins\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684672 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-plugins-conf\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.684739 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-config-data\") pod \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\" (UID: \"282d32e9-d539-4bac-9fd1-a8735e8d92e1\") " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 
15:03:36.699175 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.700321 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.701570 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.721182 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.724525 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282d32e9-d539-4bac-9fd1-a8735e8d92e1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.724690 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-kube-api-access-55kgx" (OuterVolumeSpecName: "kube-api-access-55kgx") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "kube-api-access-55kgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.725021 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.729086 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/282d32e9-d539-4bac-9fd1-a8735e8d92e1-pod-info" (OuterVolumeSpecName: "pod-info") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.753628 4718 generic.go:334] "Generic (PLEG): container finished" podID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerID="59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc" exitCode=0 Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.753968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"282d32e9-d539-4bac-9fd1-a8735e8d92e1","Type":"ContainerDied","Data":"59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc"} Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.754121 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"282d32e9-d539-4bac-9fd1-a8735e8d92e1","Type":"ContainerDied","Data":"83f4820489da7e0f23fce40539fbf05b5e91f0d1b4c31c463394476cbeda7059"} Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.754198 4718 scope.go:117] "RemoveContainer" containerID="59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.754224 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.776922 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-config-data" (OuterVolumeSpecName: "config-data") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793159 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793291 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793308 4718 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/282d32e9-d539-4bac-9fd1-a8735e8d92e1-pod-info\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793355 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55kgx\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-kube-api-access-55kgx\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793370 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793381 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793441 4718 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793460 
4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.793489 4718 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/282d32e9-d539-4bac-9fd1-a8735e8d92e1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.828729 4718 scope.go:117] "RemoveContainer" containerID="49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.828908 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.871468 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-server-conf" (OuterVolumeSpecName: "server-conf") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.890791 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "282d32e9-d539-4bac-9fd1-a8735e8d92e1" (UID: "282d32e9-d539-4bac-9fd1-a8735e8d92e1"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.895485 4718 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/282d32e9-d539-4bac-9fd1-a8735e8d92e1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.895525 4718 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/282d32e9-d539-4bac-9fd1-a8735e8d92e1-server-conf\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.895535 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.906075 4718 scope.go:117] "RemoveContainer" containerID="59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc" Dec 10 15:03:36 crc kubenswrapper[4718]: E1210 15:03:36.907642 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc\": container with ID starting with 59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc not found: ID does not exist" containerID="59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.907711 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc"} err="failed to get container status \"59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc\": rpc error: code = NotFound desc = could not find container \"59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc\": container with ID starting with 
59d74c840e862e7d3f916969d5014bad6fbf30eedc965312f87861e06f8d87fc not found: ID does not exist" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.907744 4718 scope.go:117] "RemoveContainer" containerID="49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d" Dec 10 15:03:36 crc kubenswrapper[4718]: E1210 15:03:36.909189 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d\": container with ID starting with 49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d not found: ID does not exist" containerID="49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d" Dec 10 15:03:36 crc kubenswrapper[4718]: I1210 15:03:36.909256 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d"} err="failed to get container status \"49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d\": rpc error: code = NotFound desc = could not find container \"49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d\": container with ID starting with 49d4ddfce0fa0f7d4939e716b90aec39e4406e5c6207e6f2157b8812b80cc12d not found: ID does not exist" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.068977 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 10 15:03:37 crc kubenswrapper[4718]: W1210 15:03:37.108629 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e530819_d029_4526_aed9_2cd33568dbcb.slice/crio-d86489ba59f45180324f9713b16995fa34d1fd32a0fcd77bbe8591720b6ef748 WatchSource:0}: Error finding container d86489ba59f45180324f9713b16995fa34d1fd32a0fcd77bbe8591720b6ef748: Status 404 returned error can't find the container with id 
d86489ba59f45180324f9713b16995fa34d1fd32a0fcd77bbe8591720b6ef748 Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.127707 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.154252 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.172735 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:03:37 crc kubenswrapper[4718]: E1210 15:03:37.176022 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="setup-container" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.179292 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="setup-container" Dec 10 15:03:37 crc kubenswrapper[4718]: E1210 15:03:37.179401 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="rabbitmq" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.179487 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="rabbitmq" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.179944 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" containerName="rabbitmq" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.183280 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.187248 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.187910 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.187951 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.187973 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-svxt4" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.188108 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.188219 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.198223 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.198577 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.309177 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.309235 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.309473 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.309878 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310211 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zg6r\" 
(UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-kube-api-access-9zg6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310260 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310445 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.310519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417118 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zg6r\" (UniqueName: 
\"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-kube-api-access-9zg6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417211 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417270 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417314 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417346 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417439 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417471 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417512 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417591 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417640 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.417660 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 
15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.430329 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.430728 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.431067 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.431368 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.431416 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.432117 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.436171 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.445656 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.446606 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.455242 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.467285 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zg6r\" (UniqueName: \"kubernetes.io/projected/55b4c58e-c07e-4cd2-8592-f57b1d9f9233-kube-api-access-9zg6r\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.555669 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"55b4c58e-c07e-4cd2-8592-f57b1d9f9233\") " pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.621852 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:03:37 crc kubenswrapper[4718]: I1210 15:03:37.801910 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e530819-d029-4526-aed9-2cd33568dbcb","Type":"ContainerStarted","Data":"d86489ba59f45180324f9713b16995fa34d1fd32a0fcd77bbe8591720b6ef748"} Dec 10 15:03:38 crc kubenswrapper[4718]: I1210 15:03:38.037235 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282d32e9-d539-4bac-9fd1-a8735e8d92e1" path="/var/lib/kubelet/pods/282d32e9-d539-4bac-9fd1-a8735e8d92e1/volumes" Dec 10 15:03:38 crc kubenswrapper[4718]: I1210 15:03:38.176367 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 10 15:03:38 crc kubenswrapper[4718]: W1210 15:03:38.273875 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55b4c58e_c07e_4cd2_8592_f57b1d9f9233.slice/crio-b5b3397a2839d7db5f4fe0b87fe940f180dfe6f116d7d2c9fa913b790ae3f584 WatchSource:0}: Error finding container b5b3397a2839d7db5f4fe0b87fe940f180dfe6f116d7d2c9fa913b790ae3f584: Status 404 returned error can't find the container with id b5b3397a2839d7db5f4fe0b87fe940f180dfe6f116d7d2c9fa913b790ae3f584 Dec 10 15:03:38 crc kubenswrapper[4718]: I1210 15:03:38.813620 4718 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"55b4c58e-c07e-4cd2-8592-f57b1d9f9233","Type":"ContainerStarted","Data":"b5b3397a2839d7db5f4fe0b87fe940f180dfe6f116d7d2c9fa913b790ae3f584"} Dec 10 15:03:39 crc kubenswrapper[4718]: I1210 15:03:39.837172 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e530819-d029-4526-aed9-2cd33568dbcb","Type":"ContainerStarted","Data":"565f47481709146237d82a283e76048fabcf4d73093db6b1b908363c9450125f"} Dec 10 15:03:40 crc kubenswrapper[4718]: I1210 15:03:40.852089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"55b4c58e-c07e-4cd2-8592-f57b1d9f9233","Type":"ContainerStarted","Data":"1eea9919acf435ed7d507e8aef2d77fb146551db48aac26c08d0daf2f968f0ea"} Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.238430 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-pg5nw"] Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.241202 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.244309 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.262691 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-pg5nw"] Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.357275 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-sb\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.357381 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-openstack-edpm-ipam\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.357958 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.358116 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-nb\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " 
pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.358228 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-svc\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.358644 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bc4k\" (UniqueName: \"kubernetes.io/projected/ec75500c-e3d3-4180-9313-8ac2fb33ac55-kube-api-access-5bc4k\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.358742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-config\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461617 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461723 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-nb\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " 
pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461769 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-svc\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461862 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bc4k\" (UniqueName: \"kubernetes.io/projected/ec75500c-e3d3-4180-9313-8ac2fb33ac55-kube-api-access-5bc4k\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461889 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-config\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461924 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-sb\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.461950 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-openstack-edpm-ipam\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc 
kubenswrapper[4718]: I1210 15:03:49.463438 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-nb\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.463438 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.463555 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-sb\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.463732 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-openstack-edpm-ipam\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.463870 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-config\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.463973 4718 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-svc\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.489011 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bc4k\" (UniqueName: \"kubernetes.io/projected/ec75500c-e3d3-4180-9313-8ac2fb33ac55-kube-api-access-5bc4k\") pod \"dnsmasq-dns-bf6c7df67-pg5nw\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:49 crc kubenswrapper[4718]: I1210 15:03:49.563774 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:50 crc kubenswrapper[4718]: I1210 15:03:50.514908 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-pg5nw"] Dec 10 15:03:51 crc kubenswrapper[4718]: I1210 15:03:51.001622 4718 generic.go:334] "Generic (PLEG): container finished" podID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerID="42e5a945a62d6bfd18939e334cb8bf30bf528059265faab209774e588b3f3fa7" exitCode=0 Dec 10 15:03:51 crc kubenswrapper[4718]: I1210 15:03:51.001714 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" event={"ID":"ec75500c-e3d3-4180-9313-8ac2fb33ac55","Type":"ContainerDied","Data":"42e5a945a62d6bfd18939e334cb8bf30bf528059265faab209774e588b3f3fa7"} Dec 10 15:03:51 crc kubenswrapper[4718]: I1210 15:03:51.002048 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" event={"ID":"ec75500c-e3d3-4180-9313-8ac2fb33ac55","Type":"ContainerStarted","Data":"f2d4bc9febda7a0d7064498e9551a57d90f64ff037d7eeb4220a77d051f71dd5"} Dec 10 15:03:51 crc kubenswrapper[4718]: I1210 15:03:51.852791 4718 scope.go:117] "RemoveContainer" 
containerID="a14f45af416b3a29626303cc96f6de332ad3d25851121806e75e551a172f9346" Dec 10 15:03:52 crc kubenswrapper[4718]: I1210 15:03:52.037450 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:52 crc kubenswrapper[4718]: I1210 15:03:52.037502 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" event={"ID":"ec75500c-e3d3-4180-9313-8ac2fb33ac55","Type":"ContainerStarted","Data":"742c51439d9e3c96d0e45835a338c25b3502a3a4165b9a41049be6c0cb0a163b"} Dec 10 15:03:52 crc kubenswrapper[4718]: I1210 15:03:52.063674 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" podStartSLOduration=3.063607378 podStartE2EDuration="3.063607378s" podCreationTimestamp="2025-12-10 15:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:03:52.051172566 +0000 UTC m=+1937.000396013" watchObservedRunningTime="2025-12-10 15:03:52.063607378 +0000 UTC m=+1937.012830785" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.565698 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.661837 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-hnvbs"] Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.662168 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" podUID="e45712e4-047b-4b52-bc5f-983749972808" containerName="dnsmasq-dns" containerID="cri-o://af29a698ed499dcfbdb927491560ac434ff739be2b99122711e0955f85d32e07" gracePeriod=10 Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.949841 4718 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-5c6b84c7df-hwqf9"] Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.952253 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.977629 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.977816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.978000 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thplr\" (UniqueName: \"kubernetes.io/projected/177ad74b-362a-478f-a755-7c2862fa179d-kube-api-access-thplr\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.978095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.978304 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-config\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.978344 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:03:59 crc kubenswrapper[4718]: I1210 15:03:59.978471 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-dns-svc\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.338853 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.338942 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.339056 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-thplr\" (UniqueName: \"kubernetes.io/projected/177ad74b-362a-478f-a755-7c2862fa179d-kube-api-access-thplr\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.339118 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.339250 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-config\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.339274 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.339347 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-dns-svc\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.340824 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-dns-svc\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.346654 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.347370 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-openstack-edpm-ipam\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.348155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.372424 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-config\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.422587 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/177ad74b-362a-478f-a755-7c2862fa179d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.454267 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thplr\" (UniqueName: \"kubernetes.io/projected/177ad74b-362a-478f-a755-7c2862fa179d-kube-api-access-thplr\") pod \"dnsmasq-dns-5c6b84c7df-hwqf9\" (UID: \"177ad74b-362a-478f-a755-7c2862fa179d\") " pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.465813 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.471948 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c6b84c7df-hwqf9"] Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.499023 4718 generic.go:334] "Generic (PLEG): container finished" podID="e45712e4-047b-4b52-bc5f-983749972808" containerID="af29a698ed499dcfbdb927491560ac434ff739be2b99122711e0955f85d32e07" exitCode=0 Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.499107 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" event={"ID":"e45712e4-047b-4b52-bc5f-983749972808","Type":"ContainerDied","Data":"af29a698ed499dcfbdb927491560ac434ff739be2b99122711e0955f85d32e07"} Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.751681 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.867720 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-sb\") pod \"e45712e4-047b-4b52-bc5f-983749972808\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.867830 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-nb\") pod \"e45712e4-047b-4b52-bc5f-983749972808\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.867935 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-svc\") pod \"e45712e4-047b-4b52-bc5f-983749972808\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.868000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kp2c\" (UniqueName: \"kubernetes.io/projected/e45712e4-047b-4b52-bc5f-983749972808-kube-api-access-2kp2c\") pod \"e45712e4-047b-4b52-bc5f-983749972808\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.868215 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-swift-storage-0\") pod \"e45712e4-047b-4b52-bc5f-983749972808\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.868261 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-config\") pod \"e45712e4-047b-4b52-bc5f-983749972808\" (UID: \"e45712e4-047b-4b52-bc5f-983749972808\") " Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.938352 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45712e4-047b-4b52-bc5f-983749972808-kube-api-access-2kp2c" (OuterVolumeSpecName: "kube-api-access-2kp2c") pod "e45712e4-047b-4b52-bc5f-983749972808" (UID: "e45712e4-047b-4b52-bc5f-983749972808"). InnerVolumeSpecName "kube-api-access-2kp2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:04:00 crc kubenswrapper[4718]: I1210 15:04:00.984319 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kp2c\" (UniqueName: \"kubernetes.io/projected/e45712e4-047b-4b52-bc5f-983749972808-kube-api-access-2kp2c\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.082754 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e45712e4-047b-4b52-bc5f-983749972808" (UID: "e45712e4-047b-4b52-bc5f-983749972808"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.097573 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.152125 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e45712e4-047b-4b52-bc5f-983749972808" (UID: "e45712e4-047b-4b52-bc5f-983749972808"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.185966 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e45712e4-047b-4b52-bc5f-983749972808" (UID: "e45712e4-047b-4b52-bc5f-983749972808"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.190046 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e45712e4-047b-4b52-bc5f-983749972808" (UID: "e45712e4-047b-4b52-bc5f-983749972808"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.207826 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-config" (OuterVolumeSpecName: "config") pod "e45712e4-047b-4b52-bc5f-983749972808" (UID: "e45712e4-047b-4b52-bc5f-983749972808"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.208942 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.209111 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.209206 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.210008 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e45712e4-047b-4b52-bc5f-983749972808-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.236416 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c6b84c7df-hwqf9"] Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.515492 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" event={"ID":"177ad74b-362a-478f-a755-7c2862fa179d","Type":"ContainerStarted","Data":"8ab52f38d713bb66930145d0df248065e630d14e5014a398be5b79ab609e7674"} Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.518082 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" event={"ID":"e45712e4-047b-4b52-bc5f-983749972808","Type":"ContainerDied","Data":"cb30b3f5fad89918cbd2d2909046529bc1b25488f143de3a9c88e881372df233"} Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.518157 4718 scope.go:117] "RemoveContainer" 
containerID="af29a698ed499dcfbdb927491560ac434ff739be2b99122711e0955f85d32e07" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.518325 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-hnvbs" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.550731 4718 scope.go:117] "RemoveContainer" containerID="41707bb20433d0cd14ee0c3e5f13c73be1421803852b14041574809ba2eb4977" Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.580830 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-hnvbs"] Dec 10 15:04:01 crc kubenswrapper[4718]: I1210 15:04:01.592714 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-hnvbs"] Dec 10 15:04:02 crc kubenswrapper[4718]: I1210 15:04:02.035945 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45712e4-047b-4b52-bc5f-983749972808" path="/var/lib/kubelet/pods/e45712e4-047b-4b52-bc5f-983749972808/volumes" Dec 10 15:04:02 crc kubenswrapper[4718]: I1210 15:04:02.532884 4718 generic.go:334] "Generic (PLEG): container finished" podID="177ad74b-362a-478f-a755-7c2862fa179d" containerID="cff43455e0dde74a710a781c716e9016f8bc211aa31bda90bf7a76d541b00762" exitCode=0 Dec 10 15:04:02 crc kubenswrapper[4718]: I1210 15:04:02.532995 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" event={"ID":"177ad74b-362a-478f-a755-7c2862fa179d","Type":"ContainerDied","Data":"cff43455e0dde74a710a781c716e9016f8bc211aa31bda90bf7a76d541b00762"} Dec 10 15:04:03 crc kubenswrapper[4718]: I1210 15:04:03.548811 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" event={"ID":"177ad74b-362a-478f-a755-7c2862fa179d","Type":"ContainerStarted","Data":"d6a8d22c0b40d56f1c55e4b6f6a906cc9584bc039511d85d62e1b9e4d9665d38"} Dec 10 15:04:03 crc kubenswrapper[4718]: I1210 15:04:03.549203 4718 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:03 crc kubenswrapper[4718]: I1210 15:04:03.572050 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" podStartSLOduration=4.572016412 podStartE2EDuration="4.572016412s" podCreationTimestamp="2025-12-10 15:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:04:03.570556825 +0000 UTC m=+1948.519780252" watchObservedRunningTime="2025-12-10 15:04:03.572016412 +0000 UTC m=+1948.521239829" Dec 10 15:04:10 crc kubenswrapper[4718]: I1210 15:04:10.468425 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c6b84c7df-hwqf9" Dec 10 15:04:10 crc kubenswrapper[4718]: I1210 15:04:10.561895 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-pg5nw"] Dec 10 15:04:10 crc kubenswrapper[4718]: I1210 15:04:10.562650 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerName="dnsmasq-dns" containerID="cri-o://742c51439d9e3c96d0e45835a338c25b3502a3a4165b9a41049be6c0cb0a163b" gracePeriod=10 Dec 10 15:04:11 crc kubenswrapper[4718]: I1210 15:04:11.648493 4718 generic.go:334] "Generic (PLEG): container finished" podID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerID="742c51439d9e3c96d0e45835a338c25b3502a3a4165b9a41049be6c0cb0a163b" exitCode=0 Dec 10 15:04:11 crc kubenswrapper[4718]: I1210 15:04:11.648579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" event={"ID":"ec75500c-e3d3-4180-9313-8ac2fb33ac55","Type":"ContainerDied","Data":"742c51439d9e3c96d0e45835a338c25b3502a3a4165b9a41049be6c0cb0a163b"} Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.327790 4718 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.416094 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-svc\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.416166 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-nb\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.416250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bc4k\" (UniqueName: \"kubernetes.io/projected/ec75500c-e3d3-4180-9313-8ac2fb33ac55-kube-api-access-5bc4k\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.416313 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-sb\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.416409 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-config\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.416436 4718 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-openstack-edpm-ipam\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.417057 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.440201 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec75500c-e3d3-4180-9313-8ac2fb33ac55-kube-api-access-5bc4k" (OuterVolumeSpecName: "kube-api-access-5bc4k") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "kube-api-access-5bc4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.477459 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.480593 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.481372 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-config" (OuterVolumeSpecName: "config") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.494740 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.507633 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.520110 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.520883 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0\") pod \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\" (UID: \"ec75500c-e3d3-4180-9313-8ac2fb33ac55\") " Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522136 4718 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522160 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522174 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bc4k\" (UniqueName: \"kubernetes.io/projected/ec75500c-e3d3-4180-9313-8ac2fb33ac55-kube-api-access-5bc4k\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522183 4718 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522194 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522202 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-openstack-edpm-ipam\") on node 
\"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: W1210 15:04:13.522318 4718 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ec75500c-e3d3-4180-9313-8ac2fb33ac55/volumes/kubernetes.io~configmap/dns-swift-storage-0 Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.522332 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec75500c-e3d3-4180-9313-8ac2fb33ac55" (UID: "ec75500c-e3d3-4180-9313-8ac2fb33ac55"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.624154 4718 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec75500c-e3d3-4180-9313-8ac2fb33ac55-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.678553 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" event={"ID":"ec75500c-e3d3-4180-9313-8ac2fb33ac55","Type":"ContainerDied","Data":"f2d4bc9febda7a0d7064498e9551a57d90f64ff037d7eeb4220a77d051f71dd5"} Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.678648 4718 scope.go:117] "RemoveContainer" containerID="742c51439d9e3c96d0e45835a338c25b3502a3a4165b9a41049be6c0cb0a163b" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.678654 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-pg5nw" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.683824 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e530819-d029-4526-aed9-2cd33568dbcb" containerID="565f47481709146237d82a283e76048fabcf4d73093db6b1b908363c9450125f" exitCode=0 Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.683933 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e530819-d029-4526-aed9-2cd33568dbcb","Type":"ContainerDied","Data":"565f47481709146237d82a283e76048fabcf4d73093db6b1b908363c9450125f"} Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.688050 4718 generic.go:334] "Generic (PLEG): container finished" podID="55b4c58e-c07e-4cd2-8592-f57b1d9f9233" containerID="1eea9919acf435ed7d507e8aef2d77fb146551db48aac26c08d0daf2f968f0ea" exitCode=0 Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.688098 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"55b4c58e-c07e-4cd2-8592-f57b1d9f9233","Type":"ContainerDied","Data":"1eea9919acf435ed7d507e8aef2d77fb146551db48aac26c08d0daf2f968f0ea"} Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.725470 4718 scope.go:117] "RemoveContainer" containerID="42e5a945a62d6bfd18939e334cb8bf30bf528059265faab209774e588b3f3fa7" Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.753981 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-pg5nw"] Dec 10 15:04:13 crc kubenswrapper[4718]: I1210 15:04:13.768528 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-pg5nw"] Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 15:04:14.084999 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" path="/var/lib/kubelet/pods/ec75500c-e3d3-4180-9313-8ac2fb33ac55/volumes" Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 
15:04:14.704552 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4e530819-d029-4526-aed9-2cd33568dbcb","Type":"ContainerStarted","Data":"2173b7112b4e1049a8cbfa89653968bab136139684908181e7109723259b5597"} Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 15:04:14.704915 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 15:04:14.708060 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"55b4c58e-c07e-4cd2-8592-f57b1d9f9233","Type":"ContainerStarted","Data":"2933ff1e7577c2293109eecbc4779d40a08d886f7fbf977b2f820950e61df20b"} Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 15:04:14.708295 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 15:04:14.761153 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.761115399 podStartE2EDuration="39.761115399s" podCreationTimestamp="2025-12-10 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:04:14.748254656 +0000 UTC m=+1959.697478073" watchObservedRunningTime="2025-12-10 15:04:14.761115399 +0000 UTC m=+1959.710338816" Dec 10 15:04:14 crc kubenswrapper[4718]: I1210 15:04:14.806621 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.806593521 podStartE2EDuration="37.806593521s" podCreationTimestamp="2025-12-10 15:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:04:14.804197091 +0000 UTC m=+1959.753420518" watchObservedRunningTime="2025-12-10 
15:04:14.806593521 +0000 UTC m=+1959.755816938" Dec 10 15:04:26 crc kubenswrapper[4718]: I1210 15:04:26.454112 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4e530819-d029-4526-aed9-2cd33568dbcb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.226:5671: connect: connection refused" Dec 10 15:04:27 crc kubenswrapper[4718]: I1210 15:04:27.623636 4718 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="55b4c58e-c07e-4cd2-8592-f57b1d9f9233" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.227:5671: connect: connection refused" Dec 10 15:04:29 crc kubenswrapper[4718]: I1210 15:04:29.998468 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq"] Dec 10 15:04:29 crc kubenswrapper[4718]: E1210 15:04:29.999759 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerName="init" Dec 10 15:04:29 crc kubenswrapper[4718]: I1210 15:04:29.999783 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerName="init" Dec 10 15:04:29 crc kubenswrapper[4718]: E1210 15:04:29.999808 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerName="dnsmasq-dns" Dec 10 15:04:29 crc kubenswrapper[4718]: I1210 15:04:29.999817 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerName="dnsmasq-dns" Dec 10 15:04:29 crc kubenswrapper[4718]: E1210 15:04:29.999838 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45712e4-047b-4b52-bc5f-983749972808" containerName="init" Dec 10 15:04:29 crc kubenswrapper[4718]: I1210 15:04:29.999848 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45712e4-047b-4b52-bc5f-983749972808" 
containerName="init" Dec 10 15:04:29 crc kubenswrapper[4718]: E1210 15:04:29.999877 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45712e4-047b-4b52-bc5f-983749972808" containerName="dnsmasq-dns" Dec 10 15:04:29 crc kubenswrapper[4718]: I1210 15:04:29.999884 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45712e4-047b-4b52-bc5f-983749972808" containerName="dnsmasq-dns" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.000127 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec75500c-e3d3-4180-9313-8ac2fb33ac55" containerName="dnsmasq-dns" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.000159 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45712e4-047b-4b52-bc5f-983749972808" containerName="dnsmasq-dns" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.001427 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.005208 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.005309 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.005547 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.005820 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.066582 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq"] Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.163373 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.163587 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.163689 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.163820 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxwpg\" (UniqueName: \"kubernetes.io/projected/10400b99-0213-470b-b37a-f0b9cd98ab2b-kube-api-access-qxwpg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.266521 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.266668 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxwpg\" (UniqueName: \"kubernetes.io/projected/10400b99-0213-470b-b37a-f0b9cd98ab2b-kube-api-access-qxwpg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.266766 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.266858 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.273776 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.274657 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.275324 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.310514 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxwpg\" (UniqueName: \"kubernetes.io/projected/10400b99-0213-470b-b37a-f0b9cd98ab2b-kube-api-access-qxwpg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:30 crc kubenswrapper[4718]: I1210 15:04:30.328925 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:04:31 crc kubenswrapper[4718]: I1210 15:04:31.109196 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq"] Dec 10 15:04:32 crc kubenswrapper[4718]: I1210 15:04:32.043312 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" event={"ID":"10400b99-0213-470b-b37a-f0b9cd98ab2b","Type":"ContainerStarted","Data":"ec3ee67230fed4b136336578bbe4a1106ce4c631df5743d1c59f80c55295e15e"} Dec 10 15:04:36 crc kubenswrapper[4718]: I1210 15:04:36.454862 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 10 15:04:37 crc kubenswrapper[4718]: I1210 15:04:37.625979 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 10 15:04:45 crc kubenswrapper[4718]: I1210 15:04:45.261258 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:04:46 crc kubenswrapper[4718]: I1210 15:04:46.253396 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" event={"ID":"10400b99-0213-470b-b37a-f0b9cd98ab2b","Type":"ContainerStarted","Data":"2013f0a3d68e3a4617bec514ce13c929b789cae2ab8619f8966f33d5736a4307"} Dec 10 15:04:46 crc kubenswrapper[4718]: I1210 15:04:46.307964 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" podStartSLOduration=3.174202127 podStartE2EDuration="17.307928724s" podCreationTimestamp="2025-12-10 15:04:29 +0000 UTC" firstStartedPulling="2025-12-10 15:04:31.122631913 +0000 UTC m=+1976.071855320" lastFinishedPulling="2025-12-10 15:04:45.2563585 +0000 UTC m=+1990.205581917" observedRunningTime="2025-12-10 
15:04:46.295740188 +0000 UTC m=+1991.244963605" watchObservedRunningTime="2025-12-10 15:04:46.307928724 +0000 UTC m=+1991.257152141" Dec 10 15:04:48 crc kubenswrapper[4718]: I1210 15:04:48.085286 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:04:48 crc kubenswrapper[4718]: I1210 15:04:48.085919 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:04:52 crc kubenswrapper[4718]: I1210 15:04:52.046025 4718 scope.go:117] "RemoveContainer" containerID="05464e0792ca08ae50c09bb7386a735a606b53c8af4f8c35fb6a7490fef3355f" Dec 10 15:04:52 crc kubenswrapper[4718]: I1210 15:04:52.086095 4718 scope.go:117] "RemoveContainer" containerID="b6f101c7d12c30882997bb52c30f3bffc266d2a63aafc2678435cbabb9f6a25b" Dec 10 15:04:52 crc kubenswrapper[4718]: I1210 15:04:52.116068 4718 scope.go:117] "RemoveContainer" containerID="601978f1b4f9d5b5406fb43ec35cd234eb953245e78bb08c3e5c147e1a0788aa" Dec 10 15:05:00 crc kubenswrapper[4718]: I1210 15:05:00.599329 4718 generic.go:334] "Generic (PLEG): container finished" podID="10400b99-0213-470b-b37a-f0b9cd98ab2b" containerID="2013f0a3d68e3a4617bec514ce13c929b789cae2ab8619f8966f33d5736a4307" exitCode=0 Dec 10 15:05:00 crc kubenswrapper[4718]: I1210 15:05:00.599436 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" 
event={"ID":"10400b99-0213-470b-b37a-f0b9cd98ab2b","Type":"ContainerDied","Data":"2013f0a3d68e3a4617bec514ce13c929b789cae2ab8619f8966f33d5736a4307"} Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.257307 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.316163 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxwpg\" (UniqueName: \"kubernetes.io/projected/10400b99-0213-470b-b37a-f0b9cd98ab2b-kube-api-access-qxwpg\") pod \"10400b99-0213-470b-b37a-f0b9cd98ab2b\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.316720 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-repo-setup-combined-ca-bundle\") pod \"10400b99-0213-470b-b37a-f0b9cd98ab2b\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.317229 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-inventory\") pod \"10400b99-0213-470b-b37a-f0b9cd98ab2b\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.317555 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-ssh-key\") pod \"10400b99-0213-470b-b37a-f0b9cd98ab2b\" (UID: \"10400b99-0213-470b-b37a-f0b9cd98ab2b\") " Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.325905 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/10400b99-0213-470b-b37a-f0b9cd98ab2b-kube-api-access-qxwpg" (OuterVolumeSpecName: "kube-api-access-qxwpg") pod "10400b99-0213-470b-b37a-f0b9cd98ab2b" (UID: "10400b99-0213-470b-b37a-f0b9cd98ab2b"). InnerVolumeSpecName "kube-api-access-qxwpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.327341 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "10400b99-0213-470b-b37a-f0b9cd98ab2b" (UID: "10400b99-0213-470b-b37a-f0b9cd98ab2b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.354769 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "10400b99-0213-470b-b37a-f0b9cd98ab2b" (UID: "10400b99-0213-470b-b37a-f0b9cd98ab2b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.357575 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-inventory" (OuterVolumeSpecName: "inventory") pod "10400b99-0213-470b-b37a-f0b9cd98ab2b" (UID: "10400b99-0213-470b-b37a-f0b9cd98ab2b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.421215 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.421261 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.421273 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxwpg\" (UniqueName: \"kubernetes.io/projected/10400b99-0213-470b-b37a-f0b9cd98ab2b-kube-api-access-qxwpg\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.421287 4718 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10400b99-0213-470b-b37a-f0b9cd98ab2b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.633500 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" event={"ID":"10400b99-0213-470b-b37a-f0b9cd98ab2b","Type":"ContainerDied","Data":"ec3ee67230fed4b136336578bbe4a1106ce4c631df5743d1c59f80c55295e15e"} Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.633805 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.633815 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec3ee67230fed4b136336578bbe4a1106ce4c631df5743d1c59f80c55295e15e" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.735069 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm"] Dec 10 15:05:02 crc kubenswrapper[4718]: E1210 15:05:02.735766 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10400b99-0213-470b-b37a-f0b9cd98ab2b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.735804 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="10400b99-0213-470b-b37a-f0b9cd98ab2b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.736068 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="10400b99-0213-470b-b37a-f0b9cd98ab2b" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.737549 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.742325 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.742429 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.742808 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.745702 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.765592 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm"] Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.832531 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn26m\" (UniqueName: \"kubernetes.io/projected/99702136-5bd9-4803-8ce9-8a89bd572648-kube-api-access-mn26m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.832589 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.832648 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: E1210 15:05:02.912627 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10400b99_0213_470b_b37a_f0b9cd98ab2b.slice/crio-ec3ee67230fed4b136336578bbe4a1106ce4c631df5743d1c59f80c55295e15e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10400b99_0213_470b_b37a_f0b9cd98ab2b.slice\": RecentStats: unable to find data in memory cache]" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.934752 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn26m\" (UniqueName: \"kubernetes.io/projected/99702136-5bd9-4803-8ce9-8a89bd572648-kube-api-access-mn26m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.934814 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.934853 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.938813 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.940490 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:02 crc kubenswrapper[4718]: I1210 15:05:02.952676 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn26m\" (UniqueName: \"kubernetes.io/projected/99702136-5bd9-4803-8ce9-8a89bd572648-kube-api-access-mn26m\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-b6wmm\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:03 crc kubenswrapper[4718]: I1210 15:05:03.066354 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:03 crc kubenswrapper[4718]: I1210 15:05:03.650746 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm"] Dec 10 15:05:04 crc kubenswrapper[4718]: I1210 15:05:04.658749 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" event={"ID":"99702136-5bd9-4803-8ce9-8a89bd572648","Type":"ContainerStarted","Data":"a8138daf5622c583f1265e3e5c5145a42f67f02be7a8abe42ab5e04497dd202b"} Dec 10 15:05:04 crc kubenswrapper[4718]: I1210 15:05:04.659075 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" event={"ID":"99702136-5bd9-4803-8ce9-8a89bd572648","Type":"ContainerStarted","Data":"2b57dcf4afd19f96e9da469359ac2b748e922ce21cbad9eb7b3a8c7346c9508a"} Dec 10 15:05:04 crc kubenswrapper[4718]: I1210 15:05:04.689004 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" podStartSLOduration=1.9598516419999998 podStartE2EDuration="2.68896792s" podCreationTimestamp="2025-12-10 15:05:02 +0000 UTC" firstStartedPulling="2025-12-10 15:05:03.647705124 +0000 UTC m=+2008.596928541" lastFinishedPulling="2025-12-10 15:05:04.376821412 +0000 UTC m=+2009.326044819" observedRunningTime="2025-12-10 15:05:04.677725568 +0000 UTC m=+2009.626948975" watchObservedRunningTime="2025-12-10 15:05:04.68896792 +0000 UTC m=+2009.638191337" Dec 10 15:05:07 crc kubenswrapper[4718]: I1210 15:05:07.700596 4718 generic.go:334] "Generic (PLEG): container finished" podID="99702136-5bd9-4803-8ce9-8a89bd572648" containerID="a8138daf5622c583f1265e3e5c5145a42f67f02be7a8abe42ab5e04497dd202b" exitCode=0 Dec 10 15:05:07 crc kubenswrapper[4718]: I1210 15:05:07.700710 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" event={"ID":"99702136-5bd9-4803-8ce9-8a89bd572648","Type":"ContainerDied","Data":"a8138daf5622c583f1265e3e5c5145a42f67f02be7a8abe42ab5e04497dd202b"} Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.205581 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.324241 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-inventory\") pod \"99702136-5bd9-4803-8ce9-8a89bd572648\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.324313 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn26m\" (UniqueName: \"kubernetes.io/projected/99702136-5bd9-4803-8ce9-8a89bd572648-kube-api-access-mn26m\") pod \"99702136-5bd9-4803-8ce9-8a89bd572648\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.324415 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-ssh-key\") pod \"99702136-5bd9-4803-8ce9-8a89bd572648\" (UID: \"99702136-5bd9-4803-8ce9-8a89bd572648\") " Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.330954 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99702136-5bd9-4803-8ce9-8a89bd572648-kube-api-access-mn26m" (OuterVolumeSpecName: "kube-api-access-mn26m") pod "99702136-5bd9-4803-8ce9-8a89bd572648" (UID: "99702136-5bd9-4803-8ce9-8a89bd572648"). InnerVolumeSpecName "kube-api-access-mn26m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.363336 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99702136-5bd9-4803-8ce9-8a89bd572648" (UID: "99702136-5bd9-4803-8ce9-8a89bd572648"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.369168 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-inventory" (OuterVolumeSpecName: "inventory") pod "99702136-5bd9-4803-8ce9-8a89bd572648" (UID: "99702136-5bd9-4803-8ce9-8a89bd572648"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.427341 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.427403 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn26m\" (UniqueName: \"kubernetes.io/projected/99702136-5bd9-4803-8ce9-8a89bd572648-kube-api-access-mn26m\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.427418 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99702136-5bd9-4803-8ce9-8a89bd572648-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.723983 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" 
event={"ID":"99702136-5bd9-4803-8ce9-8a89bd572648","Type":"ContainerDied","Data":"2b57dcf4afd19f96e9da469359ac2b748e922ce21cbad9eb7b3a8c7346c9508a"} Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.724039 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b57dcf4afd19f96e9da469359ac2b748e922ce21cbad9eb7b3a8c7346c9508a" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.724065 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-b6wmm" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.830961 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7"] Dec 10 15:05:09 crc kubenswrapper[4718]: E1210 15:05:09.832125 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99702136-5bd9-4803-8ce9-8a89bd572648" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.832158 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="99702136-5bd9-4803-8ce9-8a89bd572648" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.832569 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="99702136-5bd9-4803-8ce9-8a89bd572648" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.833478 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.842781 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7"] Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.846017 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.846301 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.846527 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.847307 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.945799 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.947175 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drsd4\" (UniqueName: \"kubernetes.io/projected/a7b1a942-17e0-4573-abd2-4bf182a8eef0-kube-api-access-drsd4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.947426 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:09 crc kubenswrapper[4718]: I1210 15:05:09.947967 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.050422 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drsd4\" (UniqueName: \"kubernetes.io/projected/a7b1a942-17e0-4573-abd2-4bf182a8eef0-kube-api-access-drsd4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.050843 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.050958 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.051067 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.056944 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.057486 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.067837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.070243 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-drsd4\" (UniqueName: \"kubernetes.io/projected/a7b1a942-17e0-4573-abd2-4bf182a8eef0-kube-api-access-drsd4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.167299 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:05:10 crc kubenswrapper[4718]: I1210 15:05:10.793729 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7"] Dec 10 15:05:11 crc kubenswrapper[4718]: I1210 15:05:11.750110 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" event={"ID":"a7b1a942-17e0-4573-abd2-4bf182a8eef0","Type":"ContainerStarted","Data":"2f35288a0f758c0fd6a3def60df7839a6b1e35d3e955dd4008239d019f2eb5f5"} Dec 10 15:05:11 crc kubenswrapper[4718]: I1210 15:05:11.752154 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" event={"ID":"a7b1a942-17e0-4573-abd2-4bf182a8eef0","Type":"ContainerStarted","Data":"6f11ca9c0b1a4e75f17c79607e9bb0a025940081f6dead505f2aa44c5c460169"} Dec 10 15:05:11 crc kubenswrapper[4718]: I1210 15:05:11.774766 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" podStartSLOduration=2.146325583 podStartE2EDuration="2.774726182s" podCreationTimestamp="2025-12-10 15:05:09 +0000 UTC" firstStartedPulling="2025-12-10 15:05:10.801023693 +0000 UTC m=+2015.750247110" lastFinishedPulling="2025-12-10 15:05:11.429424282 +0000 UTC m=+2016.378647709" observedRunningTime="2025-12-10 15:05:11.767050099 +0000 UTC m=+2016.716273526" watchObservedRunningTime="2025-12-10 
15:05:11.774726182 +0000 UTC m=+2016.723949599" Dec 10 15:05:18 crc kubenswrapper[4718]: I1210 15:05:18.084563 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:05:18 crc kubenswrapper[4718]: I1210 15:05:18.085691 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:05:48 crc kubenswrapper[4718]: I1210 15:05:48.084167 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:05:48 crc kubenswrapper[4718]: I1210 15:05:48.084907 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:05:48 crc kubenswrapper[4718]: I1210 15:05:48.084975 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:05:48 crc kubenswrapper[4718]: I1210 15:05:48.086309 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"5ce4270c443111ab27138184c2f5dff045fa11d171a6178ec7316680440101fb"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:05:48 crc kubenswrapper[4718]: I1210 15:05:48.086379 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://5ce4270c443111ab27138184c2f5dff045fa11d171a6178ec7316680440101fb" gracePeriod=600 Dec 10 15:05:49 crc kubenswrapper[4718]: I1210 15:05:49.230775 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="5ce4270c443111ab27138184c2f5dff045fa11d171a6178ec7316680440101fb" exitCode=0 Dec 10 15:05:49 crc kubenswrapper[4718]: I1210 15:05:49.231000 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"5ce4270c443111ab27138184c2f5dff045fa11d171a6178ec7316680440101fb"} Dec 10 15:05:49 crc kubenswrapper[4718]: I1210 15:05:49.231408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"} Dec 10 15:05:49 crc kubenswrapper[4718]: I1210 15:05:49.231442 4718 scope.go:117] "RemoveContainer" containerID="76beffb68a4302af15a19b5c310d822594a21c7c7ec551bf4bf551ca3dcf1f21" Dec 10 15:05:52 crc kubenswrapper[4718]: I1210 15:05:52.213326 4718 scope.go:117] "RemoveContainer" containerID="7478490f9f7961a45a1eb2837c66227a4e5d802f587b00eb8cdcd1973c9c2efe" Dec 10 15:06:06 crc kubenswrapper[4718]: I1210 
15:06:06.065271 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-ktpkr"] Dec 10 15:06:06 crc kubenswrapper[4718]: I1210 15:06:06.082890 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-d3c5-account-create-update-8jq74"] Dec 10 15:06:06 crc kubenswrapper[4718]: I1210 15:06:06.095226 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-ktpkr"] Dec 10 15:06:06 crc kubenswrapper[4718]: I1210 15:06:06.106545 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-d3c5-account-create-update-8jq74"] Dec 10 15:06:08 crc kubenswrapper[4718]: I1210 15:06:08.065453 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfaf5d8-4439-43ec-b379-df78a4e20036" path="/var/lib/kubelet/pods/7dfaf5d8-4439-43ec-b379-df78a4e20036/volumes" Dec 10 15:06:08 crc kubenswrapper[4718]: I1210 15:06:08.067604 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0434939-9b67-49dd-a86c-d3c366f75328" path="/var/lib/kubelet/pods/a0434939-9b67-49dd-a86c-d3c366f75328/volumes" Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.038013 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-316a-account-create-update-xsvgc"] Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.049736 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kbxrn"] Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.062978 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dgmf7"] Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.075284 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-316a-account-create-update-xsvgc"] Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.085855 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-66e1-account-create-update-tt7r2"] Dec 10 15:06:14 crc 
kubenswrapper[4718]: I1210 15:06:14.096249 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kbxrn"] Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.106123 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dgmf7"] Dec 10 15:06:14 crc kubenswrapper[4718]: I1210 15:06:14.115379 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-66e1-account-create-update-tt7r2"] Dec 10 15:06:16 crc kubenswrapper[4718]: I1210 15:06:16.066768 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ba3ecd-dc8b-455c-a7a5-737d89a02227" path="/var/lib/kubelet/pods/55ba3ecd-dc8b-455c-a7a5-737d89a02227/volumes" Dec 10 15:06:16 crc kubenswrapper[4718]: I1210 15:06:16.068554 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65778ba5-a5bf-495f-97ba-045a6ef28ce8" path="/var/lib/kubelet/pods/65778ba5-a5bf-495f-97ba-045a6ef28ce8/volumes" Dec 10 15:06:16 crc kubenswrapper[4718]: I1210 15:06:16.069292 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0ffc92-31c4-4d50-be24-ce7b6fed6506" path="/var/lib/kubelet/pods/6c0ffc92-31c4-4d50-be24-ce7b6fed6506/volumes" Dec 10 15:06:16 crc kubenswrapper[4718]: I1210 15:06:16.093277 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93986540-1957-4930-a489-dd0e648099a7" path="/var/lib/kubelet/pods/93986540-1957-4930-a489-dd0e648099a7/volumes" Dec 10 15:06:39 crc kubenswrapper[4718]: I1210 15:06:39.887246 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pzd7w"] Dec 10 15:06:39 crc kubenswrapper[4718]: I1210 15:06:39.894758 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:39 crc kubenswrapper[4718]: I1210 15:06:39.909499 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzd7w"] Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.034842 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxx7c\" (UniqueName: \"kubernetes.io/projected/049385dc-cc47-4078-955f-28c98f6fc9ec-kube-api-access-cxx7c\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.035247 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-catalog-content\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.035293 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-utilities\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.137677 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxx7c\" (UniqueName: \"kubernetes.io/projected/049385dc-cc47-4078-955f-28c98f6fc9ec-kube-api-access-cxx7c\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.137785 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-catalog-content\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.138154 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-utilities\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.138451 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-catalog-content\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.138780 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-utilities\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.166155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxx7c\" (UniqueName: \"kubernetes.io/projected/049385dc-cc47-4078-955f-28c98f6fc9ec-kube-api-access-cxx7c\") pod \"redhat-operators-pzd7w\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.222635 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.779936 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzd7w"] Dec 10 15:06:40 crc kubenswrapper[4718]: I1210 15:06:40.884488 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerStarted","Data":"00c3df77c38539283a1dca594bbe6445f3850ac5846a52723720a2dd40c843ec"} Dec 10 15:06:41 crc kubenswrapper[4718]: I1210 15:06:41.903118 4718 generic.go:334] "Generic (PLEG): container finished" podID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerID="06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5" exitCode=0 Dec 10 15:06:41 crc kubenswrapper[4718]: I1210 15:06:41.903206 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerDied","Data":"06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5"} Dec 10 15:06:41 crc kubenswrapper[4718]: I1210 15:06:41.906568 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:06:43 crc kubenswrapper[4718]: I1210 15:06:43.939987 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerStarted","Data":"5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38"} Dec 10 15:06:46 crc kubenswrapper[4718]: E1210 15:06:46.274018 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049385dc_cc47_4078_955f_28c98f6fc9ec.slice/crio-conmon-5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38.scope\": 
RecentStats: unable to find data in memory cache]" Dec 10 15:06:46 crc kubenswrapper[4718]: I1210 15:06:46.981275 4718 generic.go:334] "Generic (PLEG): container finished" podID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerID="5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38" exitCode=0 Dec 10 15:06:46 crc kubenswrapper[4718]: I1210 15:06:46.981403 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerDied","Data":"5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38"} Dec 10 15:06:48 crc kubenswrapper[4718]: I1210 15:06:48.562751 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-jhgkf" podUID="8c7642dd-0879-49cf-870a-a30a11c4d1b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 10 15:06:50 crc kubenswrapper[4718]: I1210 15:06:50.035081 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerStarted","Data":"a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c"} Dec 10 15:06:50 crc kubenswrapper[4718]: I1210 15:06:50.084296 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pzd7w" podStartSLOduration=3.907319745 podStartE2EDuration="11.084157984s" podCreationTimestamp="2025-12-10 15:06:39 +0000 UTC" firstStartedPulling="2025-12-10 15:06:41.905906443 +0000 UTC m=+2106.855129860" lastFinishedPulling="2025-12-10 15:06:49.082744682 +0000 UTC m=+2114.031968099" observedRunningTime="2025-12-10 15:06:50.071250234 +0000 UTC m=+2115.020473651" watchObservedRunningTime="2025-12-10 15:06:50.084157984 +0000 UTC m=+2115.033381411" Dec 10 15:06:50 crc kubenswrapper[4718]: I1210 
15:06:50.223211 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:50 crc kubenswrapper[4718]: I1210 15:06:50.223874 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:06:51 crc kubenswrapper[4718]: I1210 15:06:51.286363 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pzd7w" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="registry-server" probeResult="failure" output=< Dec 10 15:06:51 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 15:06:51 crc kubenswrapper[4718]: > Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.301092 4718 scope.go:117] "RemoveContainer" containerID="6febd6def6c02ad2a58fbe71e1ce8de745dc9b13bd89f9fbcfcc1cfcbab515b6" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.357743 4718 scope.go:117] "RemoveContainer" containerID="f64d88482fac23cf5723258ca8d01796d926859245dc69f1d95365586300f637" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.408736 4718 scope.go:117] "RemoveContainer" containerID="bc8c905ce26e7970b0ce341f8acc58987ce97e6ab2c599c5a2e10fd5a25ca7b0" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.481984 4718 scope.go:117] "RemoveContainer" containerID="022e20bd087211a2be64a6e1310f61cd5da2269ffc534257c6e8b0151e372ef8" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.544703 4718 scope.go:117] "RemoveContainer" containerID="55b437207d781e209e0bdb47ccc6f14714239cdd2c89f1876c97f24de28499c8" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.587129 4718 scope.go:117] "RemoveContainer" containerID="98c682674f43ddeeef8f3dc4167fea6925bd60c6b114f68a2d403e50048e343c" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.618260 4718 scope.go:117] "RemoveContainer" containerID="165041dfec3fd1a13de2cdfb612c54a2d79536a91631ef4f4d900216e1f99d71" Dec 
10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.647341 4718 scope.go:117] "RemoveContainer" containerID="13b3a993de8477c61dfb0bdc85bda6748a6cb25aa537ce81262096a726dcce73" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.705912 4718 scope.go:117] "RemoveContainer" containerID="9bcf705243f11ced10fe30234cf27288acb50f6abd504752daa8bf97090c014c" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.734410 4718 scope.go:117] "RemoveContainer" containerID="59bdd796d7d6c933cb1363548c6ed8b05419bc137c307cf44d7a62ed849e92f0" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.761978 4718 scope.go:117] "RemoveContainer" containerID="1239efa9e73f912dd8e937153183ce37860c0b79e87c56baa44eb8b5fb376ac7" Dec 10 15:06:52 crc kubenswrapper[4718]: I1210 15:06:52.784316 4718 scope.go:117] "RemoveContainer" containerID="3858ea3a4ddc2da9264644a70e9fe5c89b6c4dec6a43e1dfa65be42268a53a87" Dec 10 15:06:53 crc kubenswrapper[4718]: I1210 15:06:53.084564 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k6j4m"] Dec 10 15:06:53 crc kubenswrapper[4718]: I1210 15:06:53.103478 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k6j4m"] Dec 10 15:06:54 crc kubenswrapper[4718]: I1210 15:06:54.038781 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5dab54-fa6b-4cfd-9c05-a6eeeec57de3" path="/var/lib/kubelet/pods/df5dab54-fa6b-4cfd-9c05-a6eeeec57de3/volumes" Dec 10 15:06:54 crc kubenswrapper[4718]: I1210 15:06:54.055671 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5fkv5"] Dec 10 15:06:54 crc kubenswrapper[4718]: I1210 15:06:54.073432 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5fkv5"] Dec 10 15:06:54 crc kubenswrapper[4718]: I1210 15:06:54.092067 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fe2e-account-create-update-q9jg8"] Dec 10 15:06:54 crc kubenswrapper[4718]: I1210 
15:06:54.105824 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fe2e-account-create-update-q9jg8"] Dec 10 15:06:55 crc kubenswrapper[4718]: I1210 15:06:55.058719 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db97-account-create-update-5zs2l"] Dec 10 15:06:55 crc kubenswrapper[4718]: I1210 15:06:55.074171 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db97-account-create-update-5zs2l"] Dec 10 15:06:56 crc kubenswrapper[4718]: I1210 15:06:56.038607 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e87b030-a8e8-444f-9dda-a2d7a563aba9" path="/var/lib/kubelet/pods/1e87b030-a8e8-444f-9dda-a2d7a563aba9/volumes" Dec 10 15:06:56 crc kubenswrapper[4718]: I1210 15:06:56.039706 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73737af3-b524-4da3-a97a-d3815b32ae7b" path="/var/lib/kubelet/pods/73737af3-b524-4da3-a97a-d3815b32ae7b/volumes" Dec 10 15:06:56 crc kubenswrapper[4718]: I1210 15:06:56.040368 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec673fdd-e3b3-4576-aefa-c53822bd5f27" path="/var/lib/kubelet/pods/ec673fdd-e3b3-4576-aefa-c53822bd5f27/volumes" Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.048798 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9569-account-create-update-gxrxr"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.149433 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pxzzc"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.162322 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3d20-account-create-update-z8f8t"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.176580 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mxtgr"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.187516 4718 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/neutron-3d20-account-create-update-z8f8t"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.197954 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mxtgr"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.208295 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pxzzc"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.218500 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9569-account-create-update-gxrxr"] Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.284373 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.346805 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:07:00 crc kubenswrapper[4718]: I1210 15:07:00.529573 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzd7w"] Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.033601 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ac237b-96dc-4d03-9afd-3693b621a63b" path="/var/lib/kubelet/pods/40ac237b-96dc-4d03-9afd-3693b621a63b/volumes" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.035171 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="694a1ec1-f2d8-4637-b0ae-2bdba236854b" path="/var/lib/kubelet/pods/694a1ec1-f2d8-4637-b0ae-2bdba236854b/volumes" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.035877 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1528b35-a982-4f53-90fb-0f0374ffcdb3" path="/var/lib/kubelet/pods/d1528b35-a982-4f53-90fb-0f0374ffcdb3/volumes" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.036547 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fcd536b9-1513-4c72-8cb9-acbf491e6b3d" path="/var/lib/kubelet/pods/fcd536b9-1513-4c72-8cb9-acbf491e6b3d/volumes" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.225674 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pzd7w" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="registry-server" containerID="cri-o://a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c" gracePeriod=2 Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.750894 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.903827 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-utilities\") pod \"049385dc-cc47-4078-955f-28c98f6fc9ec\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.904218 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxx7c\" (UniqueName: \"kubernetes.io/projected/049385dc-cc47-4078-955f-28c98f6fc9ec-kube-api-access-cxx7c\") pod \"049385dc-cc47-4078-955f-28c98f6fc9ec\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.904515 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-catalog-content\") pod \"049385dc-cc47-4078-955f-28c98f6fc9ec\" (UID: \"049385dc-cc47-4078-955f-28c98f6fc9ec\") " Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.905145 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-utilities" (OuterVolumeSpecName: "utilities") 
pod "049385dc-cc47-4078-955f-28c98f6fc9ec" (UID: "049385dc-cc47-4078-955f-28c98f6fc9ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.905452 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:02 crc kubenswrapper[4718]: I1210 15:07:02.914851 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049385dc-cc47-4078-955f-28c98f6fc9ec-kube-api-access-cxx7c" (OuterVolumeSpecName: "kube-api-access-cxx7c") pod "049385dc-cc47-4078-955f-28c98f6fc9ec" (UID: "049385dc-cc47-4078-955f-28c98f6fc9ec"). InnerVolumeSpecName "kube-api-access-cxx7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.008135 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxx7c\" (UniqueName: \"kubernetes.io/projected/049385dc-cc47-4078-955f-28c98f6fc9ec-kube-api-access-cxx7c\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.022185 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "049385dc-cc47-4078-955f-28c98f6fc9ec" (UID: "049385dc-cc47-4078-955f-28c98f6fc9ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.112453 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/049385dc-cc47-4078-955f-28c98f6fc9ec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.241017 4718 generic.go:334] "Generic (PLEG): container finished" podID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerID="a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c" exitCode=0 Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.241096 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerDied","Data":"a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c"} Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.241215 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzd7w" event={"ID":"049385dc-cc47-4078-955f-28c98f6fc9ec","Type":"ContainerDied","Data":"00c3df77c38539283a1dca594bbe6445f3850ac5846a52723720a2dd40c843ec"} Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.241214 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzd7w" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.241241 4718 scope.go:117] "RemoveContainer" containerID="a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.287605 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzd7w"] Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.294184 4718 scope.go:117] "RemoveContainer" containerID="5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.298118 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pzd7w"] Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.334278 4718 scope.go:117] "RemoveContainer" containerID="06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.375860 4718 scope.go:117] "RemoveContainer" containerID="a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c" Dec 10 15:07:03 crc kubenswrapper[4718]: E1210 15:07:03.376940 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c\": container with ID starting with a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c not found: ID does not exist" containerID="a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.377043 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c"} err="failed to get container status \"a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c\": rpc error: code = NotFound desc = could not find container 
\"a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c\": container with ID starting with a32f2cf72bafcc104f322e2dda77a25f2120333efbf53b471ee98d10849c6b2c not found: ID does not exist" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.377130 4718 scope.go:117] "RemoveContainer" containerID="5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38" Dec 10 15:07:03 crc kubenswrapper[4718]: E1210 15:07:03.377960 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38\": container with ID starting with 5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38 not found: ID does not exist" containerID="5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.378020 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38"} err="failed to get container status \"5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38\": rpc error: code = NotFound desc = could not find container \"5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38\": container with ID starting with 5058e564fdcaf8cbefa1ebde7a3aed929b5874a21cff095b82ecb6905aa66f38 not found: ID does not exist" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.378052 4718 scope.go:117] "RemoveContainer" containerID="06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5" Dec 10 15:07:03 crc kubenswrapper[4718]: E1210 15:07:03.378608 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5\": container with ID starting with 06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5 not found: ID does not exist" 
containerID="06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5" Dec 10 15:07:03 crc kubenswrapper[4718]: I1210 15:07:03.378666 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5"} err="failed to get container status \"06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5\": rpc error: code = NotFound desc = could not find container \"06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5\": container with ID starting with 06dd01131b9eeeb19668bf3d1c7b7dfad25ad793dd649ff7a37fe75c58d6fdc5 not found: ID does not exist" Dec 10 15:07:04 crc kubenswrapper[4718]: I1210 15:07:04.041684 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" path="/var/lib/kubelet/pods/049385dc-cc47-4078-955f-28c98f6fc9ec/volumes" Dec 10 15:07:29 crc kubenswrapper[4718]: I1210 15:07:29.055992 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-nhtpt"] Dec 10 15:07:29 crc kubenswrapper[4718]: I1210 15:07:29.067591 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-nhtpt"] Dec 10 15:07:30 crc kubenswrapper[4718]: I1210 15:07:30.034321 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801e3b76-dd13-4285-9597-8f7874496ed5" path="/var/lib/kubelet/pods/801e3b76-dd13-4285-9597-8f7874496ed5/volumes" Dec 10 15:07:48 crc kubenswrapper[4718]: I1210 15:07:48.083799 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:07:48 crc kubenswrapper[4718]: I1210 15:07:48.084860 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.020938 4718 scope.go:117] "RemoveContainer" containerID="bc7204ed242e5df8ba92dc5033f2c984c42e3690025e33a8e30668744e0363c3" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.057778 4718 scope.go:117] "RemoveContainer" containerID="f59b49d2f55b83625d0744fc11c15650a1ded84a919859b9530099dc5d596cd3" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.124638 4718 scope.go:117] "RemoveContainer" containerID="1eef66815823a1a20fa4b36826326ff1e43676f5e62b09cf904ad050f90d1ffe" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.192330 4718 scope.go:117] "RemoveContainer" containerID="46fc31ff953bc13a4411e6db8f2d2c6f86fb3366991f5bce9d8fd36837803827" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.236290 4718 scope.go:117] "RemoveContainer" containerID="4517535a28edcb323efd5e3b0dc7f232ca9c4d6123a5a197136016ebf4b3b423" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.283262 4718 scope.go:117] "RemoveContainer" containerID="db973de307b14a5162d5187cdbfacfe4c57fd955103c73d63283b37fdf9a5823" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.344438 4718 scope.go:117] "RemoveContainer" containerID="5828b8554fc36ef8d4562434851260cc4164618ef4d7a238ab7439964d6b5803" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.370861 4718 scope.go:117] "RemoveContainer" containerID="b45e03e56b3c9b1b14020bda8abf447ed3f61d00e80d7e8264977ea6905c5c8e" Dec 10 15:07:53 crc kubenswrapper[4718]: I1210 15:07:53.419206 4718 scope.go:117] "RemoveContainer" containerID="914878a308caab8ea00b865994a333e870b11adb944ec4e4bfd010a94c7bcb68" Dec 10 15:08:17 crc kubenswrapper[4718]: I1210 15:08:17.053616 4718 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/watcher-db-sync-bcnrt"] Dec 10 15:08:17 crc kubenswrapper[4718]: I1210 15:08:17.063964 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-bcnrt"] Dec 10 15:08:18 crc kubenswrapper[4718]: I1210 15:08:18.039106 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1fd7f3-5d9a-44b4-8e4e-e71df148b109" path="/var/lib/kubelet/pods/bf1fd7f3-5d9a-44b4-8e4e-e71df148b109/volumes" Dec 10 15:08:18 crc kubenswrapper[4718]: I1210 15:08:18.083963 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:08:18 crc kubenswrapper[4718]: I1210 15:08:18.084038 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:08:29 crc kubenswrapper[4718]: I1210 15:08:29.055774 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qklhp"] Dec 10 15:08:29 crc kubenswrapper[4718]: I1210 15:08:29.068485 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hwv84"] Dec 10 15:08:29 crc kubenswrapper[4718]: I1210 15:08:29.080871 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qklhp"] Dec 10 15:08:29 crc kubenswrapper[4718]: I1210 15:08:29.090267 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hwv84"] Dec 10 15:08:30 crc kubenswrapper[4718]: I1210 15:08:30.034815 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="80b183de-b86e-49a8-8c3c-ebf398fc65eb" path="/var/lib/kubelet/pods/80b183de-b86e-49a8-8c3c-ebf398fc65eb/volumes" Dec 10 15:08:30 crc kubenswrapper[4718]: I1210 15:08:30.035512 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6329eaf-fcae-417e-96a8-96719f02420b" path="/var/lib/kubelet/pods/f6329eaf-fcae-417e-96a8-96719f02420b/volumes" Dec 10 15:08:32 crc kubenswrapper[4718]: I1210 15:08:32.037689 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qgdwr"] Dec 10 15:08:32 crc kubenswrapper[4718]: I1210 15:08:32.053725 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qgdwr"] Dec 10 15:08:34 crc kubenswrapper[4718]: I1210 15:08:34.034357 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3828426-9676-412b-aeaa-22c7c97989c4" path="/var/lib/kubelet/pods/b3828426-9676-412b-aeaa-22c7c97989c4/volumes" Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.085250 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.085807 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.085909 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.086893 4718 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.086987 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" gracePeriod=600 Dec 10 15:08:48 crc kubenswrapper[4718]: E1210 15:08:48.226818 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.552098 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" exitCode=0 Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.552196 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"} Dec 10 15:08:48 crc kubenswrapper[4718]: I1210 15:08:48.552478 4718 scope.go:117] "RemoveContainer" containerID="5ce4270c443111ab27138184c2f5dff045fa11d171a6178ec7316680440101fb" Dec 10 15:08:48 crc 
kubenswrapper[4718]: I1210 15:08:48.553358 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:08:48 crc kubenswrapper[4718]: E1210 15:08:48.553759 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:08:53 crc kubenswrapper[4718]: I1210 15:08:53.670321 4718 scope.go:117] "RemoveContainer" containerID="fa6a2558a3cb00acb2ee5107a07d8a0a8f64bf8afa3aa4aec3b558b02c87faa8" Dec 10 15:08:53 crc kubenswrapper[4718]: I1210 15:08:53.731279 4718 scope.go:117] "RemoveContainer" containerID="b5545b7e7d53cd55e962135153804b1b5030323a12b587abfdac17ff83a30908" Dec 10 15:08:53 crc kubenswrapper[4718]: I1210 15:08:53.797061 4718 scope.go:117] "RemoveContainer" containerID="98f769a12127da0cc2a40d9e3421fce565b7ca1da8624cc5b376f8fc402d8596" Dec 10 15:08:53 crc kubenswrapper[4718]: I1210 15:08:53.845304 4718 scope.go:117] "RemoveContainer" containerID="afb065c6f5d1d7dc86c06200e773d7019392ee936131ec9ded0efe4e02f88d46" Dec 10 15:08:53 crc kubenswrapper[4718]: I1210 15:08:53.932981 4718 scope.go:117] "RemoveContainer" containerID="557591a1a71180960c70024cdaabc7123b9d03913c6069cf5a5e26eb6ce3fbc5" Dec 10 15:08:53 crc kubenswrapper[4718]: I1210 15:08:53.953990 4718 scope.go:117] "RemoveContainer" containerID="23d4dcb35f647dcee7ccd4837f689e76a074605956223bd46cff165951a06847" Dec 10 15:08:59 crc kubenswrapper[4718]: I1210 15:08:59.020557 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:08:59 crc kubenswrapper[4718]: E1210 15:08:59.021887 4718 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:09:00 crc kubenswrapper[4718]: I1210 15:09:00.702570 4718 generic.go:334] "Generic (PLEG): container finished" podID="a7b1a942-17e0-4573-abd2-4bf182a8eef0" containerID="2f35288a0f758c0fd6a3def60df7839a6b1e35d3e955dd4008239d019f2eb5f5" exitCode=0 Dec 10 15:09:00 crc kubenswrapper[4718]: I1210 15:09:00.702931 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" event={"ID":"a7b1a942-17e0-4573-abd2-4bf182a8eef0","Type":"ContainerDied","Data":"2f35288a0f758c0fd6a3def60df7839a6b1e35d3e955dd4008239d019f2eb5f5"} Dec 10 15:09:01 crc kubenswrapper[4718]: I1210 15:09:01.046352 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-cwrnt"] Dec 10 15:09:01 crc kubenswrapper[4718]: I1210 15:09:01.060340 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-cwrnt"] Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.037466 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3990dc15-53e8-4cd7-a25d-f9b322b74f3e" path="/var/lib/kubelet/pods/3990dc15-53e8-4cd7-a25d-f9b322b74f3e/volumes" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.321485 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.450803 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-inventory\") pod \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.451047 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-ssh-key\") pod \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.451156 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-bootstrap-combined-ca-bundle\") pod \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.451335 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drsd4\" (UniqueName: \"kubernetes.io/projected/a7b1a942-17e0-4573-abd2-4bf182a8eef0-kube-api-access-drsd4\") pod \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\" (UID: \"a7b1a942-17e0-4573-abd2-4bf182a8eef0\") " Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.458779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b1a942-17e0-4573-abd2-4bf182a8eef0-kube-api-access-drsd4" (OuterVolumeSpecName: "kube-api-access-drsd4") pod "a7b1a942-17e0-4573-abd2-4bf182a8eef0" (UID: "a7b1a942-17e0-4573-abd2-4bf182a8eef0"). InnerVolumeSpecName "kube-api-access-drsd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.458971 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a7b1a942-17e0-4573-abd2-4bf182a8eef0" (UID: "a7b1a942-17e0-4573-abd2-4bf182a8eef0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.493495 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-inventory" (OuterVolumeSpecName: "inventory") pod "a7b1a942-17e0-4573-abd2-4bf182a8eef0" (UID: "a7b1a942-17e0-4573-abd2-4bf182a8eef0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.501926 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a7b1a942-17e0-4573-abd2-4bf182a8eef0" (UID: "a7b1a942-17e0-4573-abd2-4bf182a8eef0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.555448 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.555498 4718 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.555518 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drsd4\" (UniqueName: \"kubernetes.io/projected/a7b1a942-17e0-4573-abd2-4bf182a8eef0-kube-api-access-drsd4\") on node \"crc\" DevicePath \"\"" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.555530 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7b1a942-17e0-4573-abd2-4bf182a8eef0-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.727269 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" event={"ID":"a7b1a942-17e0-4573-abd2-4bf182a8eef0","Type":"ContainerDied","Data":"6f11ca9c0b1a4e75f17c79607e9bb0a025940081f6dead505f2aa44c5c460169"} Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.727330 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f11ca9c0b1a4e75f17c79607e9bb0a025940081f6dead505f2aa44c5c460169" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.727372 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.828856 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb"] Dec 10 15:09:02 crc kubenswrapper[4718]: E1210 15:09:02.829553 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="registry-server" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.829613 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="registry-server" Dec 10 15:09:02 crc kubenswrapper[4718]: E1210 15:09:02.829649 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b1a942-17e0-4573-abd2-4bf182a8eef0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.829657 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b1a942-17e0-4573-abd2-4bf182a8eef0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:09:02 crc kubenswrapper[4718]: E1210 15:09:02.829689 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="extract-content" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.829695 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="extract-content" Dec 10 15:09:02 crc kubenswrapper[4718]: E1210 15:09:02.829715 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="extract-utilities" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.829721 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="extract-utilities" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.829939 
4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="049385dc-cc47-4078-955f-28c98f6fc9ec" containerName="registry-server" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.829955 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b1a942-17e0-4573-abd2-4bf182a8eef0" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.830992 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.834703 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.835025 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.836658 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.839166 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.846216 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb"] Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.964782 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 
15:09:02.964970 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8657\" (UniqueName: \"kubernetes.io/projected/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-kube-api-access-f8657\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:02 crc kubenswrapper[4718]: I1210 15:09:02.965071 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.067643 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.068057 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8657\" (UniqueName: \"kubernetes.io/projected/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-kube-api-access-f8657\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.068130 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.075207 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.075202 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.088750 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8657\" (UniqueName: \"kubernetes.io/projected/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-kube-api-access-f8657\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.159156 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:09:03 crc kubenswrapper[4718]: I1210 15:09:03.805451 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb"] Dec 10 15:09:04 crc kubenswrapper[4718]: I1210 15:09:04.756760 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" event={"ID":"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c","Type":"ContainerStarted","Data":"fd03f52e5c746cf135634b154d3158731777f94f81f49fc165ace780e15dccf8"} Dec 10 15:09:05 crc kubenswrapper[4718]: I1210 15:09:05.770044 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" event={"ID":"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c","Type":"ContainerStarted","Data":"747c547ba7b9f823c115484e901bb93ebe7f4f2c8d9a45cc5b048b4a3ce0a758"} Dec 10 15:09:05 crc kubenswrapper[4718]: I1210 15:09:05.793893 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" podStartSLOduration=2.968533758 podStartE2EDuration="3.793849687s" podCreationTimestamp="2025-12-10 15:09:02 +0000 UTC" firstStartedPulling="2025-12-10 15:09:03.837548003 +0000 UTC m=+2248.786771420" lastFinishedPulling="2025-12-10 15:09:04.662863932 +0000 UTC m=+2249.612087349" observedRunningTime="2025-12-10 15:09:05.788753237 +0000 UTC m=+2250.737976654" watchObservedRunningTime="2025-12-10 15:09:05.793849687 +0000 UTC m=+2250.743073104" Dec 10 15:09:13 crc kubenswrapper[4718]: I1210 15:09:13.020925 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:09:13 crc kubenswrapper[4718]: E1210 15:09:13.022069 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:09:27 crc kubenswrapper[4718]: I1210 15:09:27.020680 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:09:27 crc kubenswrapper[4718]: E1210 15:09:27.021654 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:09:37 crc kubenswrapper[4718]: I1210 15:09:37.128346 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-hctjl"] Dec 10 15:09:37 crc kubenswrapper[4718]: I1210 15:09:37.139975 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-hctjl"] Dec 10 15:09:38 crc kubenswrapper[4718]: I1210 15:09:38.037904 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a092ad-773d-47b4-bc1f-73358adecf4a" path="/var/lib/kubelet/pods/71a092ad-773d-47b4-bc1f-73358adecf4a/volumes" Dec 10 15:09:40 crc kubenswrapper[4718]: I1210 15:09:40.022732 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:09:40 crc kubenswrapper[4718]: E1210 15:09:40.023686 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.054954 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zr6j2"] Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.070416 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zr6j2"] Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.188516 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9ppmz"] Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.191865 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.220583 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ppmz"] Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.303230 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgkg\" (UniqueName: \"kubernetes.io/projected/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-kube-api-access-pcgkg\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.303305 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-catalog-content\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.303462 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-utilities\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.405776 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcgkg\" (UniqueName: \"kubernetes.io/projected/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-kube-api-access-pcgkg\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.405843 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-catalog-content\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.406011 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-utilities\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.406669 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-utilities\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.406864 4718 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-catalog-content\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.436512 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcgkg\" (UniqueName: \"kubernetes.io/projected/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-kube-api-access-pcgkg\") pod \"community-operators-9ppmz\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:49 crc kubenswrapper[4718]: I1210 15:09:49.518739 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:50 crc kubenswrapper[4718]: I1210 15:09:50.037763 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db4ee945-67d7-4670-9192-2ecaf4f03c3d" path="/var/lib/kubelet/pods/db4ee945-67d7-4670-9192-2ecaf4f03c3d/volumes" Dec 10 15:09:50 crc kubenswrapper[4718]: I1210 15:09:50.134416 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9ppmz"] Dec 10 15:09:50 crc kubenswrapper[4718]: I1210 15:09:50.350163 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerStarted","Data":"2239dac1841d68669b8bef31315314f241bd2e28047f5b6644d122a7d8fe20b7"} Dec 10 15:09:51 crc kubenswrapper[4718]: I1210 15:09:51.021152 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:09:51 crc kubenswrapper[4718]: E1210 15:09:51.021705 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:09:51 crc kubenswrapper[4718]: I1210 15:09:51.367186 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerID="16d0b72ce0b64ddface8a5196cfcbb8344cd1b17ab07acef22fab7c9df7217b7" exitCode=0 Dec 10 15:09:51 crc kubenswrapper[4718]: I1210 15:09:51.367266 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerDied","Data":"16d0b72ce0b64ddface8a5196cfcbb8344cd1b17ab07acef22fab7c9df7217b7"} Dec 10 15:09:53 crc kubenswrapper[4718]: I1210 15:09:53.390931 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerStarted","Data":"330696d53bf2fcdd150be923dfcd303cfa693508609ad2a772a5f3c8bf4791ea"} Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.044152 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bqj7n"] Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.055311 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bqj7n"] Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.128581 4718 scope.go:117] "RemoveContainer" containerID="cb50add0d25a417f0511946c6a546ff6032e149e5d3e6c23746e66422a172656" Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.169943 4718 scope.go:117] "RemoveContainer" containerID="1ac122f7920ac5e36138ce8ff9e8333fbba94203b46380f16113189b53ac545e" Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.240079 4718 scope.go:117] "RemoveContainer" 
containerID="72987f27f5a542319df56b0be9e453dd2a1038e82678d8220b4c98c21b0878ff" Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.403338 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerID="330696d53bf2fcdd150be923dfcd303cfa693508609ad2a772a5f3c8bf4791ea" exitCode=0 Dec 10 15:09:54 crc kubenswrapper[4718]: I1210 15:09:54.403426 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerDied","Data":"330696d53bf2fcdd150be923dfcd303cfa693508609ad2a772a5f3c8bf4791ea"} Dec 10 15:09:56 crc kubenswrapper[4718]: I1210 15:09:56.039657 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e" path="/var/lib/kubelet/pods/2ffe7b9f-4057-4b1c-a2dd-18fbafb5474e/volumes" Dec 10 15:09:56 crc kubenswrapper[4718]: I1210 15:09:56.045983 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q7sbn"] Dec 10 15:09:56 crc kubenswrapper[4718]: I1210 15:09:56.059923 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q7sbn"] Dec 10 15:09:56 crc kubenswrapper[4718]: I1210 15:09:56.427904 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerStarted","Data":"e7923e10bb5cf71f83f170c307bb32eab3ad5258cf20d44e9f3ec95b3a96397c"} Dec 10 15:09:56 crc kubenswrapper[4718]: I1210 15:09:56.452664 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9ppmz" podStartSLOduration=2.843678302 podStartE2EDuration="7.452642623s" podCreationTimestamp="2025-12-10 15:09:49 +0000 UTC" firstStartedPulling="2025-12-10 15:09:51.374800395 +0000 UTC m=+2296.324023812" lastFinishedPulling="2025-12-10 15:09:55.983764716 
+0000 UTC m=+2300.932988133" observedRunningTime="2025-12-10 15:09:56.450544069 +0000 UTC m=+2301.399767486" watchObservedRunningTime="2025-12-10 15:09:56.452642623 +0000 UTC m=+2301.401866040" Dec 10 15:09:57 crc kubenswrapper[4718]: I1210 15:09:57.060528 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-abb8-account-create-update-gvhvt"] Dec 10 15:09:57 crc kubenswrapper[4718]: I1210 15:09:57.071040 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-abb8-account-create-update-gvhvt"] Dec 10 15:09:58 crc kubenswrapper[4718]: I1210 15:09:58.034896 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494a054f-4818-4649-83ef-5b058e0d9436" path="/var/lib/kubelet/pods/494a054f-4818-4649-83ef-5b058e0d9436/volumes" Dec 10 15:09:58 crc kubenswrapper[4718]: I1210 15:09:58.036210 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c8e09f-d39f-4f38-9e9e-199399c09a14" path="/var/lib/kubelet/pods/53c8e09f-d39f-4f38-9e9e-199399c09a14/volumes" Dec 10 15:09:58 crc kubenswrapper[4718]: I1210 15:09:58.043618 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1795-account-create-update-xl996"] Dec 10 15:09:58 crc kubenswrapper[4718]: I1210 15:09:58.066629 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1795-account-create-update-xl996"] Dec 10 15:09:58 crc kubenswrapper[4718]: I1210 15:09:58.078803 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-z2hg5"] Dec 10 15:09:58 crc kubenswrapper[4718]: I1210 15:09:58.118945 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-z2hg5"] Dec 10 15:09:59 crc kubenswrapper[4718]: I1210 15:09:59.518978 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:59 crc kubenswrapper[4718]: I1210 15:09:59.519257 4718 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:09:59 crc kubenswrapper[4718]: I1210 15:09:59.571128 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:10:00 crc kubenswrapper[4718]: I1210 15:10:00.043971 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f397d2-d553-4a53-88b2-314c2dc7ebf6" path="/var/lib/kubelet/pods/92f397d2-d553-4a53-88b2-314c2dc7ebf6/volumes" Dec 10 15:10:00 crc kubenswrapper[4718]: I1210 15:10:00.044923 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a444d7-abdd-44f0-82e1-cc8d1cd9b240" path="/var/lib/kubelet/pods/e8a444d7-abdd-44f0-82e1-cc8d1cd9b240/volumes" Dec 10 15:10:00 crc kubenswrapper[4718]: I1210 15:10:00.046257 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9820-account-create-update-dph9s"] Dec 10 15:10:00 crc kubenswrapper[4718]: I1210 15:10:00.065984 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9820-account-create-update-dph9s"] Dec 10 15:10:01 crc kubenswrapper[4718]: I1210 15:10:01.542051 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:10:01 crc kubenswrapper[4718]: I1210 15:10:01.613020 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ppmz"] Dec 10 15:10:02 crc kubenswrapper[4718]: I1210 15:10:02.020618 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:10:02 crc kubenswrapper[4718]: E1210 15:10:02.021119 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:10:02 crc kubenswrapper[4718]: I1210 15:10:02.032306 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973eb84a-8809-4047-9112-4501f249ba68" path="/var/lib/kubelet/pods/973eb84a-8809-4047-9112-4501f249ba68/volumes" Dec 10 15:10:03 crc kubenswrapper[4718]: I1210 15:10:03.506716 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9ppmz" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="registry-server" containerID="cri-o://e7923e10bb5cf71f83f170c307bb32eab3ad5258cf20d44e9f3ec95b3a96397c" gracePeriod=2 Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.521046 4718 generic.go:334] "Generic (PLEG): container finished" podID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerID="e7923e10bb5cf71f83f170c307bb32eab3ad5258cf20d44e9f3ec95b3a96397c" exitCode=0 Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.521126 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerDied","Data":"e7923e10bb5cf71f83f170c307bb32eab3ad5258cf20d44e9f3ec95b3a96397c"} Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.521590 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9ppmz" event={"ID":"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e","Type":"ContainerDied","Data":"2239dac1841d68669b8bef31315314f241bd2e28047f5b6644d122a7d8fe20b7"} Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.521613 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2239dac1841d68669b8bef31315314f241bd2e28047f5b6644d122a7d8fe20b7" Dec 10 15:10:04 crc kubenswrapper[4718]: 
I1210 15:10:04.530624 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.620909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-utilities\") pod \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.621035 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-catalog-content\") pod \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.621114 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcgkg\" (UniqueName: \"kubernetes.io/projected/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-kube-api-access-pcgkg\") pod \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\" (UID: \"1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e\") " Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.623283 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-utilities" (OuterVolumeSpecName: "utilities") pod "1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" (UID: "1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.631798 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-kube-api-access-pcgkg" (OuterVolumeSpecName: "kube-api-access-pcgkg") pod "1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" (UID: "1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e"). InnerVolumeSpecName "kube-api-access-pcgkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.675248 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" (UID: "1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.724786 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcgkg\" (UniqueName: \"kubernetes.io/projected/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-kube-api-access-pcgkg\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.724849 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:04 crc kubenswrapper[4718]: I1210 15:10:04.724863 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:05 crc kubenswrapper[4718]: I1210 15:10:05.532069 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9ppmz" Dec 10 15:10:05 crc kubenswrapper[4718]: I1210 15:10:05.573837 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9ppmz"] Dec 10 15:10:05 crc kubenswrapper[4718]: I1210 15:10:05.584783 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9ppmz"] Dec 10 15:10:06 crc kubenswrapper[4718]: I1210 15:10:06.035255 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" path="/var/lib/kubelet/pods/1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e/volumes" Dec 10 15:10:17 crc kubenswrapper[4718]: I1210 15:10:17.021152 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:10:17 crc kubenswrapper[4718]: E1210 15:10:17.022185 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:10:28 crc kubenswrapper[4718]: I1210 15:10:28.020901 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:10:28 crc kubenswrapper[4718]: E1210 15:10:28.021938 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.585442 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vpffn"] Dec 10 15:10:39 crc kubenswrapper[4718]: E1210 15:10:39.586770 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="extract-content" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.586792 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="extract-content" Dec 10 15:10:39 crc kubenswrapper[4718]: E1210 15:10:39.586839 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="registry-server" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.586849 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="registry-server" Dec 10 15:10:39 crc kubenswrapper[4718]: E1210 15:10:39.586889 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="extract-utilities" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.586902 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="extract-utilities" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.587189 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2a1e5f-eb1c-4070-9ddd-a0cb2a883c3e" containerName="registry-server" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.589067 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.605428 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpffn"] Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.670349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wmf2\" (UniqueName: \"kubernetes.io/projected/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-kube-api-access-6wmf2\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.670494 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-utilities\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.670536 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-catalog-content\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.772263 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-utilities\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.772347 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-catalog-content\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.772461 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wmf2\" (UniqueName: \"kubernetes.io/projected/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-kube-api-access-6wmf2\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.772907 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-utilities\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.772987 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-catalog-content\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.796881 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wmf2\" (UniqueName: \"kubernetes.io/projected/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-kube-api-access-6wmf2\") pod \"redhat-marketplace-vpffn\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:39 crc kubenswrapper[4718]: I1210 15:10:39.911563 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:40 crc kubenswrapper[4718]: I1210 15:10:40.756267 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpffn"] Dec 10 15:10:41 crc kubenswrapper[4718]: I1210 15:10:41.012510 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpffn" event={"ID":"c2c1e20e-5859-490b-a388-df2b1fbc9ef1","Type":"ContainerStarted","Data":"669cd6b6a55b326a91ae3d07c3ca5052aec8ad1dc13bd03ad42029815e7378e2"} Dec 10 15:10:42 crc kubenswrapper[4718]: I1210 15:10:42.022528 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:10:42 crc kubenswrapper[4718]: E1210 15:10:42.023280 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:10:42 crc kubenswrapper[4718]: I1210 15:10:42.030637 4718 generic.go:334] "Generic (PLEG): container finished" podID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerID="60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752" exitCode=0 Dec 10 15:10:42 crc kubenswrapper[4718]: I1210 15:10:42.039699 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpffn" event={"ID":"c2c1e20e-5859-490b-a388-df2b1fbc9ef1","Type":"ContainerDied","Data":"60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752"} Dec 10 15:10:43 crc kubenswrapper[4718]: E1210 15:10:43.450128 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2c1e20e_5859_490b_a388_df2b1fbc9ef1.slice/crio-fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:10:44 crc kubenswrapper[4718]: I1210 15:10:44.064159 4718 generic.go:334] "Generic (PLEG): container finished" podID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerID="fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1" exitCode=0 Dec 10 15:10:44 crc kubenswrapper[4718]: I1210 15:10:44.064228 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpffn" event={"ID":"c2c1e20e-5859-490b-a388-df2b1fbc9ef1","Type":"ContainerDied","Data":"fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1"} Dec 10 15:10:45 crc kubenswrapper[4718]: I1210 15:10:45.082316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpffn" event={"ID":"c2c1e20e-5859-490b-a388-df2b1fbc9ef1","Type":"ContainerStarted","Data":"a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471"} Dec 10 15:10:45 crc kubenswrapper[4718]: I1210 15:10:45.115172 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vpffn" podStartSLOduration=3.414812004 podStartE2EDuration="6.115134248s" podCreationTimestamp="2025-12-10 15:10:39 +0000 UTC" firstStartedPulling="2025-12-10 15:10:42.037404052 +0000 UTC m=+2346.986627469" lastFinishedPulling="2025-12-10 15:10:44.737726296 +0000 UTC m=+2349.686949713" observedRunningTime="2025-12-10 15:10:45.10378915 +0000 UTC m=+2350.053012567" watchObservedRunningTime="2025-12-10 15:10:45.115134248 +0000 UTC m=+2350.064357665" Dec 10 15:10:49 crc kubenswrapper[4718]: I1210 15:10:49.912126 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:49 crc 
kubenswrapper[4718]: I1210 15:10:49.913741 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:49 crc kubenswrapper[4718]: I1210 15:10:49.972650 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:50 crc kubenswrapper[4718]: I1210 15:10:50.210899 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:51 crc kubenswrapper[4718]: I1210 15:10:51.551244 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpffn"] Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.188225 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vpffn" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="registry-server" containerID="cri-o://a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471" gracePeriod=2 Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.681778 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.707697 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wmf2\" (UniqueName: \"kubernetes.io/projected/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-kube-api-access-6wmf2\") pod \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.707837 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-catalog-content\") pod \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.707883 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-utilities\") pod \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\" (UID: \"c2c1e20e-5859-490b-a388-df2b1fbc9ef1\") " Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.709249 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-utilities" (OuterVolumeSpecName: "utilities") pod "c2c1e20e-5859-490b-a388-df2b1fbc9ef1" (UID: "c2c1e20e-5859-490b-a388-df2b1fbc9ef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.715912 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-kube-api-access-6wmf2" (OuterVolumeSpecName: "kube-api-access-6wmf2") pod "c2c1e20e-5859-490b-a388-df2b1fbc9ef1" (UID: "c2c1e20e-5859-490b-a388-df2b1fbc9ef1"). InnerVolumeSpecName "kube-api-access-6wmf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.741147 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2c1e20e-5859-490b-a388-df2b1fbc9ef1" (UID: "c2c1e20e-5859-490b-a388-df2b1fbc9ef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.810522 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wmf2\" (UniqueName: \"kubernetes.io/projected/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-kube-api-access-6wmf2\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.810594 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:52 crc kubenswrapper[4718]: I1210 15:10:52.810614 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2c1e20e-5859-490b-a388-df2b1fbc9ef1-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.204702 4718 generic.go:334] "Generic (PLEG): container finished" podID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerID="a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471" exitCode=0 Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.204802 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpffn" event={"ID":"c2c1e20e-5859-490b-a388-df2b1fbc9ef1","Type":"ContainerDied","Data":"a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471"} Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.204919 4718 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vpffn" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.205404 4718 scope.go:117] "RemoveContainer" containerID="a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.205355 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vpffn" event={"ID":"c2c1e20e-5859-490b-a388-df2b1fbc9ef1","Type":"ContainerDied","Data":"669cd6b6a55b326a91ae3d07c3ca5052aec8ad1dc13bd03ad42029815e7378e2"} Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.243770 4718 scope.go:117] "RemoveContainer" containerID="fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.249271 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpffn"] Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.263521 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vpffn"] Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.279963 4718 scope.go:117] "RemoveContainer" containerID="60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.332871 4718 scope.go:117] "RemoveContainer" containerID="a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471" Dec 10 15:10:53 crc kubenswrapper[4718]: E1210 15:10:53.333585 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471\": container with ID starting with a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471 not found: ID does not exist" containerID="a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.333621 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471"} err="failed to get container status \"a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471\": rpc error: code = NotFound desc = could not find container \"a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471\": container with ID starting with a4e4ab537efa657dce444bf22e701e51b17f9a0dc88c15f3601b947b12f44471 not found: ID does not exist" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.333649 4718 scope.go:117] "RemoveContainer" containerID="fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1" Dec 10 15:10:53 crc kubenswrapper[4718]: E1210 15:10:53.334824 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1\": container with ID starting with fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1 not found: ID does not exist" containerID="fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.334854 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1"} err="failed to get container status \"fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1\": rpc error: code = NotFound desc = could not find container \"fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1\": container with ID starting with fb250341291d79f8dbbdadc84f266da1ca43c69e49e0a02a23913a2d2c7c5fb1 not found: ID does not exist" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.334870 4718 scope.go:117] "RemoveContainer" containerID="60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752" Dec 10 15:10:53 crc kubenswrapper[4718]: E1210 
15:10:53.336013 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752\": container with ID starting with 60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752 not found: ID does not exist" containerID="60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752" Dec 10 15:10:53 crc kubenswrapper[4718]: I1210 15:10:53.336040 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752"} err="failed to get container status \"60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752\": rpc error: code = NotFound desc = could not find container \"60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752\": container with ID starting with 60b2e8e9f8b72c9a897b577ad853722bbf9d2154752d877f22a7b467a0dc2752 not found: ID does not exist" Dec 10 15:10:54 crc kubenswrapper[4718]: I1210 15:10:54.035358 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" path="/var/lib/kubelet/pods/c2c1e20e-5859-490b-a388-df2b1fbc9ef1/volumes" Dec 10 15:10:54 crc kubenswrapper[4718]: I1210 15:10:54.394101 4718 scope.go:117] "RemoveContainer" containerID="a29fd9c61aa28706fa5c1ffbb42321687f1b88f8cd61205213d14fefcf0558fa" Dec 10 15:10:54 crc kubenswrapper[4718]: I1210 15:10:54.434576 4718 scope.go:117] "RemoveContainer" containerID="f7a33f0cb27f4558a08f843750f07d74190973272c6b174236e70bfa6c0c2489" Dec 10 15:10:54 crc kubenswrapper[4718]: I1210 15:10:54.485179 4718 scope.go:117] "RemoveContainer" containerID="4aba8bb9ec93f14d819c0a3a755c64b0c571a598daa9bd5de906e209defd3259" Dec 10 15:10:54 crc kubenswrapper[4718]: I1210 15:10:54.549886 4718 scope.go:117] "RemoveContainer" containerID="71042674cf241a0594e797aeb1d33b8781828a24c8a0265bb141be1271360622" Dec 10 15:10:54 crc 
kubenswrapper[4718]: I1210 15:10:54.611589 4718 scope.go:117] "RemoveContainer" containerID="3788c1c2dfd3bab1fc2a97ccb9179fee7e7a0e8383936e75c17253c6eabcf485" Dec 10 15:10:54 crc kubenswrapper[4718]: I1210 15:10:54.660025 4718 scope.go:117] "RemoveContainer" containerID="fdb8c0b77873ec44286374f4c946c1bb142cba1c170126b8e161a530444c509c" Dec 10 15:10:55 crc kubenswrapper[4718]: I1210 15:10:55.022823 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:10:55 crc kubenswrapper[4718]: E1210 15:10:55.023280 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.956198 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k4zfl"] Dec 10 15:10:57 crc kubenswrapper[4718]: E1210 15:10:57.957490 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="extract-content" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.957511 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="extract-content" Dec 10 15:10:57 crc kubenswrapper[4718]: E1210 15:10:57.957558 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="extract-utilities" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.957567 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="extract-utilities" Dec 10 15:10:57 crc 
kubenswrapper[4718]: E1210 15:10:57.957597 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="registry-server" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.957605 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="registry-server" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.957898 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c1e20e-5859-490b-a388-df2b1fbc9ef1" containerName="registry-server" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.959929 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:57 crc kubenswrapper[4718]: I1210 15:10:57.978974 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4zfl"] Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.147048 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-catalog-content\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.147615 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-utilities\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.147774 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2db6\" (UniqueName: 
\"kubernetes.io/projected/52deaec7-cc2d-4943-b0e0-bc09ad511065-kube-api-access-h2db6\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.251377 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-utilities\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.251502 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2db6\" (UniqueName: \"kubernetes.io/projected/52deaec7-cc2d-4943-b0e0-bc09ad511065-kube-api-access-h2db6\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.251705 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-catalog-content\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.252054 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-utilities\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.252324 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-catalog-content\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.280147 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2db6\" (UniqueName: \"kubernetes.io/projected/52deaec7-cc2d-4943-b0e0-bc09ad511065-kube-api-access-h2db6\") pod \"certified-operators-k4zfl\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.283617 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:10:58 crc kubenswrapper[4718]: I1210 15:10:58.957015 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k4zfl"] Dec 10 15:10:59 crc kubenswrapper[4718]: I1210 15:10:59.277515 4718 generic.go:334] "Generic (PLEG): container finished" podID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerID="dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36" exitCode=0 Dec 10 15:10:59 crc kubenswrapper[4718]: I1210 15:10:59.277775 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerDied","Data":"dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36"} Dec 10 15:10:59 crc kubenswrapper[4718]: I1210 15:10:59.278031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerStarted","Data":"b0898db4b1c85368abec23504e7256f4c5f0be8f9c5e58c43796f25e95171e33"} Dec 10 15:11:02 crc kubenswrapper[4718]: I1210 15:11:02.329958 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerStarted","Data":"02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d"} Dec 10 15:11:04 crc kubenswrapper[4718]: I1210 15:11:04.352956 4718 generic.go:334] "Generic (PLEG): container finished" podID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerID="02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d" exitCode=0 Dec 10 15:11:04 crc kubenswrapper[4718]: I1210 15:11:04.353460 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerDied","Data":"02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d"} Dec 10 15:11:06 crc kubenswrapper[4718]: I1210 15:11:06.379250 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerStarted","Data":"8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec"} Dec 10 15:11:06 crc kubenswrapper[4718]: I1210 15:11:06.403009 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k4zfl" podStartSLOduration=3.371653693 podStartE2EDuration="9.402974543s" podCreationTimestamp="2025-12-10 15:10:57 +0000 UTC" firstStartedPulling="2025-12-10 15:10:59.280121112 +0000 UTC m=+2364.229344529" lastFinishedPulling="2025-12-10 15:11:05.311441962 +0000 UTC m=+2370.260665379" observedRunningTime="2025-12-10 15:11:06.397261308 +0000 UTC m=+2371.346484735" watchObservedRunningTime="2025-12-10 15:11:06.402974543 +0000 UTC m=+2371.352197960" Dec 10 15:11:07 crc kubenswrapper[4718]: I1210 15:11:07.020714 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:11:07 crc kubenswrapper[4718]: E1210 15:11:07.021444 4718 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:11:08 crc kubenswrapper[4718]: I1210 15:11:08.284691 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:11:08 crc kubenswrapper[4718]: I1210 15:11:08.284812 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:11:08 crc kubenswrapper[4718]: I1210 15:11:08.333875 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:11:18 crc kubenswrapper[4718]: I1210 15:11:18.354088 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:11:18 crc kubenswrapper[4718]: I1210 15:11:18.415221 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4zfl"] Dec 10 15:11:18 crc kubenswrapper[4718]: I1210 15:11:18.535104 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k4zfl" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="registry-server" containerID="cri-o://8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec" gracePeriod=2 Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.020683 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:11:19 crc kubenswrapper[4718]: E1210 15:11:19.021225 4718 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.033547 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.137121 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2db6\" (UniqueName: \"kubernetes.io/projected/52deaec7-cc2d-4943-b0e0-bc09ad511065-kube-api-access-h2db6\") pod \"52deaec7-cc2d-4943-b0e0-bc09ad511065\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.137212 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-catalog-content\") pod \"52deaec7-cc2d-4943-b0e0-bc09ad511065\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.137361 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-utilities\") pod \"52deaec7-cc2d-4943-b0e0-bc09ad511065\" (UID: \"52deaec7-cc2d-4943-b0e0-bc09ad511065\") " Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.139076 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-utilities" (OuterVolumeSpecName: "utilities") pod "52deaec7-cc2d-4943-b0e0-bc09ad511065" (UID: 
"52deaec7-cc2d-4943-b0e0-bc09ad511065"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.143262 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52deaec7-cc2d-4943-b0e0-bc09ad511065-kube-api-access-h2db6" (OuterVolumeSpecName: "kube-api-access-h2db6") pod "52deaec7-cc2d-4943-b0e0-bc09ad511065" (UID: "52deaec7-cc2d-4943-b0e0-bc09ad511065"). InnerVolumeSpecName "kube-api-access-h2db6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.199152 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52deaec7-cc2d-4943-b0e0-bc09ad511065" (UID: "52deaec7-cc2d-4943-b0e0-bc09ad511065"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.240105 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2db6\" (UniqueName: \"kubernetes.io/projected/52deaec7-cc2d-4943-b0e0-bc09ad511065-kube-api-access-h2db6\") on node \"crc\" DevicePath \"\"" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.240340 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.240358 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52deaec7-cc2d-4943-b0e0-bc09ad511065-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.549063 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerID="8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec" exitCode=0 Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.549141 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k4zfl" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.549135 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerDied","Data":"8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec"} Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.549220 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k4zfl" event={"ID":"52deaec7-cc2d-4943-b0e0-bc09ad511065","Type":"ContainerDied","Data":"b0898db4b1c85368abec23504e7256f4c5f0be8f9c5e58c43796f25e95171e33"} Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.549251 4718 scope.go:117] "RemoveContainer" containerID="8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.580560 4718 scope.go:117] "RemoveContainer" containerID="02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.590840 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k4zfl"] Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.600667 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k4zfl"] Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.617870 4718 scope.go:117] "RemoveContainer" containerID="dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.658409 4718 scope.go:117] "RemoveContainer" 
containerID="8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec" Dec 10 15:11:19 crc kubenswrapper[4718]: E1210 15:11:19.659239 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec\": container with ID starting with 8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec not found: ID does not exist" containerID="8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.659352 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec"} err="failed to get container status \"8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec\": rpc error: code = NotFound desc = could not find container \"8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec\": container with ID starting with 8ece470529a9f3352edd09d8e8b9baa3cee3122dbede2412ba5495901a5cf1ec not found: ID does not exist" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.659421 4718 scope.go:117] "RemoveContainer" containerID="02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d" Dec 10 15:11:19 crc kubenswrapper[4718]: E1210 15:11:19.660011 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d\": container with ID starting with 02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d not found: ID does not exist" containerID="02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.660130 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d"} err="failed to get container status \"02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d\": rpc error: code = NotFound desc = could not find container \"02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d\": container with ID starting with 02e7a0f99890aee5cd4b55926ce099b46ab25507cacab7bd54e7e76a01abee4d not found: ID does not exist" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.660228 4718 scope.go:117] "RemoveContainer" containerID="dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36" Dec 10 15:11:19 crc kubenswrapper[4718]: E1210 15:11:19.660768 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36\": container with ID starting with dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36 not found: ID does not exist" containerID="dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36" Dec 10 15:11:19 crc kubenswrapper[4718]: I1210 15:11:19.660824 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36"} err="failed to get container status \"dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36\": rpc error: code = NotFound desc = could not find container \"dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36\": container with ID starting with dc89d69ccfeea5ab736c1481bc20d76d6b036b652d4e2789268cbce01f5bff36 not found: ID does not exist" Dec 10 15:11:20 crc kubenswrapper[4718]: I1210 15:11:20.034319 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" path="/var/lib/kubelet/pods/52deaec7-cc2d-4943-b0e0-bc09ad511065/volumes" Dec 10 15:11:31 crc kubenswrapper[4718]: I1210 
15:11:31.368250 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:11:31 crc kubenswrapper[4718]: E1210 15:11:31.377453 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:11:32 crc kubenswrapper[4718]: I1210 15:11:32.053489 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6q6w4"] Dec 10 15:11:32 crc kubenswrapper[4718]: I1210 15:11:32.063585 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6q6w4"] Dec 10 15:11:34 crc kubenswrapper[4718]: I1210 15:11:34.035973 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dae6a105-9e31-412f-8809-ce10dfacfe35" path="/var/lib/kubelet/pods/dae6a105-9e31-412f-8809-ce10dfacfe35/volumes" Dec 10 15:11:34 crc kubenswrapper[4718]: I1210 15:11:34.710503 4718 generic.go:334] "Generic (PLEG): container finished" podID="93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" containerID="747c547ba7b9f823c115484e901bb93ebe7f4f2c8d9a45cc5b048b4a3ce0a758" exitCode=0 Dec 10 15:11:34 crc kubenswrapper[4718]: I1210 15:11:34.710552 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" event={"ID":"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c","Type":"ContainerDied","Data":"747c547ba7b9f823c115484e901bb93ebe7f4f2c8d9a45cc5b048b4a3ce0a758"} Dec 10 15:11:34 crc kubenswrapper[4718]: E1210 15:11:34.990904 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b2ee40_9140_4ccf_8af3_d9bfc04ca78c.slice/crio-conmon-747c547ba7b9f823c115484e901bb93ebe7f4f2c8d9a45cc5b048b4a3ce0a758.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.358761 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.375374 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-inventory\") pod \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.375610 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-ssh-key\") pod \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.375739 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8657\" (UniqueName: \"kubernetes.io/projected/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-kube-api-access-f8657\") pod \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\" (UID: \"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c\") " Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.385837 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-kube-api-access-f8657" (OuterVolumeSpecName: "kube-api-access-f8657") pod "93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" (UID: "93b2ee40-9140-4ccf-8af3-d9bfc04ca78c"). InnerVolumeSpecName "kube-api-access-f8657". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.426310 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" (UID: "93b2ee40-9140-4ccf-8af3-d9bfc04ca78c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.426533 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-inventory" (OuterVolumeSpecName: "inventory") pod "93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" (UID: "93b2ee40-9140-4ccf-8af3-d9bfc04ca78c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.479278 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8657\" (UniqueName: \"kubernetes.io/projected/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-kube-api-access-f8657\") on node \"crc\" DevicePath \"\"" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.479359 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.479375 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93b2ee40-9140-4ccf-8af3-d9bfc04ca78c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.736293 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" 
event={"ID":"93b2ee40-9140-4ccf-8af3-d9bfc04ca78c","Type":"ContainerDied","Data":"fd03f52e5c746cf135634b154d3158731777f94f81f49fc165ace780e15dccf8"} Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.736780 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd03f52e5c746cf135634b154d3158731777f94f81f49fc165ace780e15dccf8" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.736346 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.847235 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc"] Dec 10 15:11:36 crc kubenswrapper[4718]: E1210 15:11:36.847987 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.848012 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:11:36 crc kubenswrapper[4718]: E1210 15:11:36.848031 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="extract-utilities" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.848040 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="extract-utilities" Dec 10 15:11:36 crc kubenswrapper[4718]: E1210 15:11:36.848073 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="registry-server" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.848083 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="registry-server" Dec 10 15:11:36 crc kubenswrapper[4718]: E1210 15:11:36.848095 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="extract-content" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.848101 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="extract-content" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.848309 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="52deaec7-cc2d-4943-b0e0-bc09ad511065" containerName="registry-server" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.848331 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b2ee40-9140-4ccf-8af3-d9bfc04ca78c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.849489 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.855096 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.855275 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.855788 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.856209 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.866291 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc"] Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.992443 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxckj\" (UniqueName: \"kubernetes.io/projected/7cea2862-2631-4d3a-98f8-29afc2428d28-kube-api-access-cxckj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 15:11:36.993197 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:36 crc kubenswrapper[4718]: I1210 
15:11:36.993473 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.095805 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.096444 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxckj\" (UniqueName: \"kubernetes.io/projected/7cea2862-2631-4d3a-98f8-29afc2428d28-kube-api-access-cxckj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.096573 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.103253 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.108361 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.126460 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxckj\" (UniqueName: \"kubernetes.io/projected/7cea2862-2631-4d3a-98f8-29afc2428d28-kube-api-access-cxckj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2plpc\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.173948 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:11:37 crc kubenswrapper[4718]: I1210 15:11:37.902760 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc"] Dec 10 15:11:38 crc kubenswrapper[4718]: I1210 15:11:38.719333 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:11:38 crc kubenswrapper[4718]: I1210 15:11:38.762201 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" event={"ID":"7cea2862-2631-4d3a-98f8-29afc2428d28","Type":"ContainerStarted","Data":"8679373b109bdd1133e60d7e7159410d440f86a77a87af40fc337e528dfbad9f"} Dec 10 15:11:39 crc kubenswrapper[4718]: I1210 15:11:39.773599 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" event={"ID":"7cea2862-2631-4d3a-98f8-29afc2428d28","Type":"ContainerStarted","Data":"18a4344a18898e7bab89b30a29408f3c572369113d05caa1f2afe612f92afe0d"} Dec 10 15:11:39 crc kubenswrapper[4718]: I1210 15:11:39.800637 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" podStartSLOduration=2.9951572090000003 podStartE2EDuration="3.800606267s" podCreationTimestamp="2025-12-10 15:11:36 +0000 UTC" firstStartedPulling="2025-12-10 15:11:37.911296792 +0000 UTC m=+2402.860520199" lastFinishedPulling="2025-12-10 15:11:38.71674584 +0000 UTC m=+2403.665969257" observedRunningTime="2025-12-10 15:11:39.791787383 +0000 UTC m=+2404.741010810" watchObservedRunningTime="2025-12-10 15:11:39.800606267 +0000 UTC m=+2404.749829684" Dec 10 15:11:46 crc kubenswrapper[4718]: I1210 15:11:46.028984 4718 scope.go:117] "RemoveContainer" 
containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:11:46 crc kubenswrapper[4718]: E1210 15:11:46.031883 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:11:54 crc kubenswrapper[4718]: I1210 15:11:54.885804 4718 scope.go:117] "RemoveContainer" containerID="61f1739de043d1b2e2f7dfa75726df28ce1f3d5681467714e80208bdee87b116" Dec 10 15:12:01 crc kubenswrapper[4718]: I1210 15:12:01.020960 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:12:01 crc kubenswrapper[4718]: E1210 15:12:01.021807 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:12:02 crc kubenswrapper[4718]: I1210 15:12:02.051442 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pldw"] Dec 10 15:12:02 crc kubenswrapper[4718]: I1210 15:12:02.063584 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-4pldw"] Dec 10 15:12:04 crc kubenswrapper[4718]: I1210 15:12:04.040199 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa714e1-7bc1-4d1d-b829-f053c2a4404c" 
path="/var/lib/kubelet/pods/4aa714e1-7bc1-4d1d-b829-f053c2a4404c/volumes" Dec 10 15:12:15 crc kubenswrapper[4718]: I1210 15:12:15.021530 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:12:15 crc kubenswrapper[4718]: E1210 15:12:15.022947 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:12:17 crc kubenswrapper[4718]: I1210 15:12:17.041578 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5pzcj"] Dec 10 15:12:17 crc kubenswrapper[4718]: I1210 15:12:17.054840 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5pzcj"] Dec 10 15:12:18 crc kubenswrapper[4718]: I1210 15:12:18.035333 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32c38ab3-5fa2-49cd-b76c-625823fb56a6" path="/var/lib/kubelet/pods/32c38ab3-5fa2-49cd-b76c-625823fb56a6/volumes" Dec 10 15:12:27 crc kubenswrapper[4718]: I1210 15:12:27.020758 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:12:27 crc kubenswrapper[4718]: E1210 15:12:27.021614 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:12:41 crc kubenswrapper[4718]: I1210 15:12:41.021222 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:12:41 crc kubenswrapper[4718]: E1210 15:12:41.022022 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:12:50 crc kubenswrapper[4718]: I1210 15:12:50.050964 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lwcn"] Dec 10 15:12:50 crc kubenswrapper[4718]: I1210 15:12:50.059237 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8lwcn"] Dec 10 15:12:52 crc kubenswrapper[4718]: I1210 15:12:52.033545 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4b4d05-de48-4103-9df2-bb976cd7f843" path="/var/lib/kubelet/pods/3b4b4d05-de48-4103-9df2-bb976cd7f843/volumes" Dec 10 15:12:55 crc kubenswrapper[4718]: I1210 15:12:55.002560 4718 scope.go:117] "RemoveContainer" containerID="43b23f2d7a8abf2cb507c784db0e8e74a942b2fb0ae70adf892ff570fb9e939a" Dec 10 15:12:55 crc kubenswrapper[4718]: I1210 15:12:55.021705 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:12:55 crc kubenswrapper[4718]: E1210 15:12:55.022602 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:12:55 crc kubenswrapper[4718]: I1210 15:12:55.066927 4718 scope.go:117] "RemoveContainer" containerID="4fcbacd776f0cf7418050c3e0442a241e1ac5e95255f3e49040d04b9790e13c8" Dec 10 15:12:55 crc kubenswrapper[4718]: I1210 15:12:55.128716 4718 scope.go:117] "RemoveContainer" containerID="e3aa7ef178ffe4b6b1e323b58f203e4cb53e3de8063ba7747d78f20949204d59" Dec 10 15:12:58 crc kubenswrapper[4718]: I1210 15:12:58.637855 4718 generic.go:334] "Generic (PLEG): container finished" podID="7cea2862-2631-4d3a-98f8-29afc2428d28" containerID="18a4344a18898e7bab89b30a29408f3c572369113d05caa1f2afe612f92afe0d" exitCode=0 Dec 10 15:12:58 crc kubenswrapper[4718]: I1210 15:12:58.638044 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" event={"ID":"7cea2862-2631-4d3a-98f8-29afc2428d28","Type":"ContainerDied","Data":"18a4344a18898e7bab89b30a29408f3c572369113d05caa1f2afe612f92afe0d"} Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.117826 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.181176 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-ssh-key\") pod \"7cea2862-2631-4d3a-98f8-29afc2428d28\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.182300 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-inventory\") pod \"7cea2862-2631-4d3a-98f8-29afc2428d28\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.182688 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxckj\" (UniqueName: \"kubernetes.io/projected/7cea2862-2631-4d3a-98f8-29afc2428d28-kube-api-access-cxckj\") pod \"7cea2862-2631-4d3a-98f8-29afc2428d28\" (UID: \"7cea2862-2631-4d3a-98f8-29afc2428d28\") " Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.190164 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cea2862-2631-4d3a-98f8-29afc2428d28-kube-api-access-cxckj" (OuterVolumeSpecName: "kube-api-access-cxckj") pod "7cea2862-2631-4d3a-98f8-29afc2428d28" (UID: "7cea2862-2631-4d3a-98f8-29afc2428d28"). InnerVolumeSpecName "kube-api-access-cxckj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.225098 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7cea2862-2631-4d3a-98f8-29afc2428d28" (UID: "7cea2862-2631-4d3a-98f8-29afc2428d28"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.229700 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-inventory" (OuterVolumeSpecName: "inventory") pod "7cea2862-2631-4d3a-98f8-29afc2428d28" (UID: "7cea2862-2631-4d3a-98f8-29afc2428d28"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.286332 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxckj\" (UniqueName: \"kubernetes.io/projected/7cea2862-2631-4d3a-98f8-29afc2428d28-kube-api-access-cxckj\") on node \"crc\" DevicePath \"\"" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.286428 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.286446 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cea2862-2631-4d3a-98f8-29afc2428d28-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.664023 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" event={"ID":"7cea2862-2631-4d3a-98f8-29afc2428d28","Type":"ContainerDied","Data":"8679373b109bdd1133e60d7e7159410d440f86a77a87af40fc337e528dfbad9f"} Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.664117 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2plpc" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.664149 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8679373b109bdd1133e60d7e7159410d440f86a77a87af40fc337e528dfbad9f" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.769953 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9"] Dec 10 15:13:00 crc kubenswrapper[4718]: E1210 15:13:00.770931 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cea2862-2631-4d3a-98f8-29afc2428d28" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.770965 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cea2862-2631-4d3a-98f8-29afc2428d28" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.771230 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cea2862-2631-4d3a-98f8-29afc2428d28" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.772483 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.775520 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.776463 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.777072 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.780760 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.786602 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9"] Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.900930 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.901012 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:00 crc kubenswrapper[4718]: I1210 15:13:00.901074 4718 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccl7r\" (UniqueName: \"kubernetes.io/projected/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-kube-api-access-ccl7r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.021923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.022137 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.022192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccl7r\" (UniqueName: \"kubernetes.io/projected/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-kube-api-access-ccl7r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.031873 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.038284 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.047149 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccl7r\" (UniqueName: \"kubernetes.io/projected/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-kube-api-access-ccl7r\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.090785 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9"
Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.742172 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9"]
Dec 10 15:13:01 crc kubenswrapper[4718]: I1210 15:13:01.755215 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 15:13:02 crc kubenswrapper[4718]: I1210 15:13:02.689805 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" event={"ID":"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb","Type":"ContainerStarted","Data":"74faf684e51afcb913604403b275f38b4301c6d81d39cce896e3b3cc552ed0b6"}
Dec 10 15:13:02 crc kubenswrapper[4718]: I1210 15:13:02.690611 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" event={"ID":"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb","Type":"ContainerStarted","Data":"997d7259c6f076299c4968b2211b49ce3cbf80aab6748bda616c10b265ba494a"}
Dec 10 15:13:07 crc kubenswrapper[4718]: I1210 15:13:07.020360 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"
Dec 10 15:13:07 crc kubenswrapper[4718]: E1210 15:13:07.021227 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 15:13:08 crc kubenswrapper[4718]: I1210 15:13:08.791091 4718 generic.go:334] "Generic (PLEG): container finished" podID="e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" containerID="74faf684e51afcb913604403b275f38b4301c6d81d39cce896e3b3cc552ed0b6" exitCode=0
Dec 10 15:13:08 crc kubenswrapper[4718]: I1210 15:13:08.791203 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" event={"ID":"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb","Type":"ContainerDied","Data":"74faf684e51afcb913604403b275f38b4301c6d81d39cce896e3b3cc552ed0b6"}
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.301416 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.360145 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-inventory\") pod \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") "
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.360347 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-ssh-key\") pod \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") "
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.360668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccl7r\" (UniqueName: \"kubernetes.io/projected/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-kube-api-access-ccl7r\") pod \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\" (UID: \"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb\") "
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.373057 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-kube-api-access-ccl7r" (OuterVolumeSpecName: "kube-api-access-ccl7r") pod "e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" (UID: "e98b8eb6-cfd6-4125-973e-7cda6cdeceeb"). InnerVolumeSpecName "kube-api-access-ccl7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.401033 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-inventory" (OuterVolumeSpecName: "inventory") pod "e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" (UID: "e98b8eb6-cfd6-4125-973e-7cda6cdeceeb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.408112 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" (UID: "e98b8eb6-cfd6-4125-973e-7cda6cdeceeb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.465550 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccl7r\" (UniqueName: \"kubernetes.io/projected/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-kube-api-access-ccl7r\") on node \"crc\" DevicePath \"\""
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.465595 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-inventory\") on node \"crc\" DevicePath \"\""
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.465605 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e98b8eb6-cfd6-4125-973e-7cda6cdeceeb-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.817625 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9" event={"ID":"e98b8eb6-cfd6-4125-973e-7cda6cdeceeb","Type":"ContainerDied","Data":"997d7259c6f076299c4968b2211b49ce3cbf80aab6748bda616c10b265ba494a"}
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.817687 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="997d7259c6f076299c4968b2211b49ce3cbf80aab6748bda616c10b265ba494a"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.817836 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.892165 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"]
Dec 10 15:13:10 crc kubenswrapper[4718]: E1210 15:13:10.892666 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.892685 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.892939 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98b8eb6-cfd6-4125-973e-7cda6cdeceeb" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.893775 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.897924 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.898160 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.898259 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.901004 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.916446 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"]
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.976778 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.976835 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:10 crc kubenswrapper[4718]: I1210 15:13:10.976892 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjwf\" (UniqueName: \"kubernetes.io/projected/df7b87da-2bb2-494d-b840-478a58f1950c-kube-api-access-4hjwf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.079826 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.080235 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.080424 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hjwf\" (UniqueName: \"kubernetes.io/projected/df7b87da-2bb2-494d-b840-478a58f1950c-kube-api-access-4hjwf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.084190 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.090172 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.099850 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hjwf\" (UniqueName: \"kubernetes.io/projected/df7b87da-2bb2-494d-b840-478a58f1950c-kube-api-access-4hjwf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-8vcjs\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.225042 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.790431 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"]
Dec 10 15:13:11 crc kubenswrapper[4718]: I1210 15:13:11.831551 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs" event={"ID":"df7b87da-2bb2-494d-b840-478a58f1950c","Type":"ContainerStarted","Data":"d348b9508a412c28861d6c0a28a7b04f1d2dfbe956c400ece1b8dfdfa471892a"}
Dec 10 15:13:12 crc kubenswrapper[4718]: I1210 15:13:12.845669 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs" event={"ID":"df7b87da-2bb2-494d-b840-478a58f1950c","Type":"ContainerStarted","Data":"421f81e8b29fa190c791f218f19f5909214ffea3a02fb2e167678135091522d0"}
Dec 10 15:13:20 crc kubenswrapper[4718]: I1210 15:13:20.020821 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"
Dec 10 15:13:20 crc kubenswrapper[4718]: E1210 15:13:20.021846 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 15:13:33 crc kubenswrapper[4718]: I1210 15:13:33.020342 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"
Dec 10 15:13:33 crc kubenswrapper[4718]: E1210 15:13:33.021307 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 15:13:45 crc kubenswrapper[4718]: I1210 15:13:45.021670 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"
Dec 10 15:13:45 crc kubenswrapper[4718]: E1210 15:13:45.022589 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 15:13:54 crc kubenswrapper[4718]: I1210 15:13:54.352733 4718 generic.go:334] "Generic (PLEG): container finished" podID="df7b87da-2bb2-494d-b840-478a58f1950c" containerID="421f81e8b29fa190c791f218f19f5909214ffea3a02fb2e167678135091522d0" exitCode=0
Dec 10 15:13:54 crc kubenswrapper[4718]: I1210 15:13:54.353457 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs" event={"ID":"df7b87da-2bb2-494d-b840-478a58f1950c","Type":"ContainerDied","Data":"421f81e8b29fa190c791f218f19f5909214ffea3a02fb2e167678135091522d0"}
Dec 10 15:13:55 crc kubenswrapper[4718]: I1210 15:13:55.885149 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:55 crc kubenswrapper[4718]: I1210 15:13:55.986936 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hjwf\" (UniqueName: \"kubernetes.io/projected/df7b87da-2bb2-494d-b840-478a58f1950c-kube-api-access-4hjwf\") pod \"df7b87da-2bb2-494d-b840-478a58f1950c\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") "
Dec 10 15:13:55 crc kubenswrapper[4718]: I1210 15:13:55.987103 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-inventory\") pod \"df7b87da-2bb2-494d-b840-478a58f1950c\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") "
Dec 10 15:13:55 crc kubenswrapper[4718]: I1210 15:13:55.987440 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-ssh-key\") pod \"df7b87da-2bb2-494d-b840-478a58f1950c\" (UID: \"df7b87da-2bb2-494d-b840-478a58f1950c\") "
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:55.995529 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7b87da-2bb2-494d-b840-478a58f1950c-kube-api-access-4hjwf" (OuterVolumeSpecName: "kube-api-access-4hjwf") pod "df7b87da-2bb2-494d-b840-478a58f1950c" (UID: "df7b87da-2bb2-494d-b840-478a58f1950c"). InnerVolumeSpecName "kube-api-access-4hjwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.032624 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "df7b87da-2bb2-494d-b840-478a58f1950c" (UID: "df7b87da-2bb2-494d-b840-478a58f1950c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.035656 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-inventory" (OuterVolumeSpecName: "inventory") pod "df7b87da-2bb2-494d-b840-478a58f1950c" (UID: "df7b87da-2bb2-494d-b840-478a58f1950c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.092202 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hjwf\" (UniqueName: \"kubernetes.io/projected/df7b87da-2bb2-494d-b840-478a58f1950c-kube-api-access-4hjwf\") on node \"crc\" DevicePath \"\""
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.097774 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-inventory\") on node \"crc\" DevicePath \"\""
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.098048 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df7b87da-2bb2-494d-b840-478a58f1950c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.376806 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs" event={"ID":"df7b87da-2bb2-494d-b840-478a58f1950c","Type":"ContainerDied","Data":"d348b9508a412c28861d6c0a28a7b04f1d2dfbe956c400ece1b8dfdfa471892a"}
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.377910 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d348b9508a412c28861d6c0a28a7b04f1d2dfbe956c400ece1b8dfdfa471892a"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.376965 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-8vcjs"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.500043 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"]
Dec 10 15:13:56 crc kubenswrapper[4718]: E1210 15:13:56.500661 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7b87da-2bb2-494d-b840-478a58f1950c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.500686 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7b87da-2bb2-494d-b840-478a58f1950c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.500993 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7b87da-2bb2-494d-b840-478a58f1950c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.504454 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.514672 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.515033 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.516181 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.522104 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.523773 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"]
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.616607 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.617058 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.617287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67c8\" (UniqueName: \"kubernetes.io/projected/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-kube-api-access-z67c8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.719207 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.719415 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.719582 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67c8\" (UniqueName: \"kubernetes.io/projected/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-kube-api-access-z67c8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.725627 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.735155 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.751634 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67c8\" (UniqueName: \"kubernetes.io/projected/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-kube-api-access-z67c8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:56 crc kubenswrapper[4718]: I1210 15:13:56.829479 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:13:57 crc kubenswrapper[4718]: I1210 15:13:57.020703 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26"
Dec 10 15:13:57 crc kubenswrapper[4718]: I1210 15:13:57.392745 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"7bafd0187b023eee030e985bc8fa82589611e4791ca7f50b17d0f5ccdb81fde8"}
Dec 10 15:13:57 crc kubenswrapper[4718]: I1210 15:13:57.514102 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"]
Dec 10 15:13:58 crc kubenswrapper[4718]: I1210 15:13:58.411992 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr" event={"ID":"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c","Type":"ContainerStarted","Data":"15b486c687cef9da8105e541ffb8402e7c942a663ee5d9a636494c5e79b9ff3f"}
Dec 10 15:13:59 crc kubenswrapper[4718]: I1210 15:13:59.429381 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr" event={"ID":"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c","Type":"ContainerStarted","Data":"74083fa021a51c4177d2725c7f150e44ee13f08f96112a4a2e9713c4e08a5207"}
Dec 10 15:13:59 crc kubenswrapper[4718]: I1210 15:13:59.460629 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr" podStartSLOduration=2.816654135 podStartE2EDuration="3.460559352s" podCreationTimestamp="2025-12-10 15:13:56 +0000 UTC" firstStartedPulling="2025-12-10 15:13:57.527858325 +0000 UTC m=+2542.477081742" lastFinishedPulling="2025-12-10 15:13:58.171763542 +0000 UTC m=+2543.120986959" observedRunningTime="2025-12-10 15:13:59.454660782 +0000 UTC m=+2544.403884199" watchObservedRunningTime="2025-12-10 15:13:59.460559352 +0000 UTC m=+2544.409782779"
Dec 10 15:14:54 crc kubenswrapper[4718]: I1210 15:14:54.039956 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr" event={"ID":"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c","Type":"ContainerDied","Data":"74083fa021a51c4177d2725c7f150e44ee13f08f96112a4a2e9713c4e08a5207"}
Dec 10 15:14:54 crc kubenswrapper[4718]: I1210 15:14:54.039869 4718 generic.go:334] "Generic (PLEG): container finished" podID="2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" containerID="74083fa021a51c4177d2725c7f150e44ee13f08f96112a4a2e9713c4e08a5207" exitCode=0
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.540283 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.700279 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z67c8\" (UniqueName: \"kubernetes.io/projected/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-kube-api-access-z67c8\") pod \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") "
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.700539 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-inventory\") pod \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") "
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.700698 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-ssh-key\") pod \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\" (UID: \"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c\") "
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.707058 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-kube-api-access-z67c8" (OuterVolumeSpecName: "kube-api-access-z67c8") pod "2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" (UID: "2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c"). InnerVolumeSpecName "kube-api-access-z67c8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.736132 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-inventory" (OuterVolumeSpecName: "inventory") pod "2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" (UID: "2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.736885 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" (UID: "2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.804793 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z67c8\" (UniqueName: \"kubernetes.io/projected/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-kube-api-access-z67c8\") on node \"crc\" DevicePath \"\""
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.804842 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-inventory\") on node \"crc\" DevicePath \"\""
Dec 10 15:14:55 crc kubenswrapper[4718]: I1210 15:14:55.804852 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.073550 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr" event={"ID":"2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c","Type":"ContainerDied","Data":"15b486c687cef9da8105e541ffb8402e7c942a663ee5d9a636494c5e79b9ff3f"}
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.073620 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15b486c687cef9da8105e541ffb8402e7c942a663ee5d9a636494c5e79b9ff3f"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.073727 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.512492 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bsdk2"]
Dec 10 15:14:56 crc kubenswrapper[4718]: E1210 15:14:56.515751 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.515801 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.516500 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.517867 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.521924 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.522216 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.522340 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.522584 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.545049 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bsdk2"]
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.624592 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbdv\" (UniqueName: \"kubernetes.io/projected/4d6f6be0-1d66-4c7e-a8e9-09826a416501-kube-api-access-swbdv\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.624676 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.624742 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.726681 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swbdv\" (UniqueName: \"kubernetes.io/projected/4d6f6be0-1d66-4c7e-a8e9-09826a416501-kube-api-access-swbdv\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.726751 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.726806 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.738379 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.738570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.749541 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swbdv\" (UniqueName: \"kubernetes.io/projected/4d6f6be0-1d66-4c7e-a8e9-09826a416501-kube-api-access-swbdv\") pod \"ssh-known-hosts-edpm-deployment-bsdk2\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:56 crc kubenswrapper[4718]: I1210 15:14:56.845206 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
Dec 10 15:14:57 crc kubenswrapper[4718]: I1210 15:14:57.442087 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bsdk2"]
Dec 10 15:14:58 crc kubenswrapper[4718]: I1210 15:14:58.104680 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2" event={"ID":"4d6f6be0-1d66-4c7e-a8e9-09826a416501","Type":"ContainerStarted","Data":"4d432d4ab408d14d112f2068d2db10f8513a4998098c6dc87b727eb1b4282477"}
Dec 10 15:14:59 crc kubenswrapper[4718]: I1210 15:14:59.123891 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2" event={"ID":"4d6f6be0-1d66-4c7e-a8e9-09826a416501","Type":"ContainerStarted","Data":"7f8410331b3ca103e59c90e2f84f6b94dfe9203a51028b42e701f97df11fdc1b"}
Dec 10 15:14:59 crc kubenswrapper[4718]: I1210 15:14:59.158583 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2"
podStartSLOduration=2.643518011 podStartE2EDuration="3.158555987s" podCreationTimestamp="2025-12-10 15:14:56 +0000 UTC" firstStartedPulling="2025-12-10 15:14:57.452802122 +0000 UTC m=+2602.402025539" lastFinishedPulling="2025-12-10 15:14:57.967840098 +0000 UTC m=+2602.917063515" observedRunningTime="2025-12-10 15:14:59.143971887 +0000 UTC m=+2604.093195304" watchObservedRunningTime="2025-12-10 15:14:59.158555987 +0000 UTC m=+2604.107779394" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.191127 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5"] Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.195001 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.199304 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.199739 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.213911 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5"] Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.311605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-secret-volume\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.312138 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4ngf\" (UniqueName: \"kubernetes.io/projected/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-kube-api-access-g4ngf\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.312464 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-config-volume\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.414958 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-config-volume\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.415107 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-secret-volume\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.415181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4ngf\" (UniqueName: \"kubernetes.io/projected/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-kube-api-access-g4ngf\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.416276 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-config-volume\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.433931 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-secret-volume\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.439491 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4ngf\" (UniqueName: \"kubernetes.io/projected/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-kube-api-access-g4ngf\") pod \"collect-profiles-29422995-2ksw5\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:00 crc kubenswrapper[4718]: I1210 15:15:00.524065 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:01 crc kubenswrapper[4718]: I1210 15:15:01.023219 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5"] Dec 10 15:15:01 crc kubenswrapper[4718]: I1210 15:15:01.203755 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" event={"ID":"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6","Type":"ContainerStarted","Data":"790fbf6a918add5ba281a00b361f6016ab8bf1318a0887812700522b07f5972f"} Dec 10 15:15:02 crc kubenswrapper[4718]: I1210 15:15:02.224160 4718 generic.go:334] "Generic (PLEG): container finished" podID="7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" containerID="2853606cb19265265246cb5773ac1f7585f84f129a28ccbcac4d94839ff262d5" exitCode=0 Dec 10 15:15:02 crc kubenswrapper[4718]: I1210 15:15:02.224298 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" event={"ID":"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6","Type":"ContainerDied","Data":"2853606cb19265265246cb5773ac1f7585f84f129a28ccbcac4d94839ff262d5"} Dec 10 15:15:03 crc kubenswrapper[4718]: I1210 15:15:03.663713 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:03 crc kubenswrapper[4718]: I1210 15:15:03.783229 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-secret-volume\") pod \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " Dec 10 15:15:03 crc kubenswrapper[4718]: I1210 15:15:03.783667 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-config-volume\") pod \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " Dec 10 15:15:03 crc kubenswrapper[4718]: I1210 15:15:03.784059 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4ngf\" (UniqueName: \"kubernetes.io/projected/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-kube-api-access-g4ngf\") pod \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\" (UID: \"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6\") " Dec 10 15:15:03 crc kubenswrapper[4718]: I1210 15:15:03.983027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" (UID: "7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.010810 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-kube-api-access-g4ngf" (OuterVolumeSpecName: "kube-api-access-g4ngf") pod "7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" (UID: "7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6"). 
InnerVolumeSpecName "kube-api-access-g4ngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.010979 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" (UID: "7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.090994 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4ngf\" (UniqueName: \"kubernetes.io/projected/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-kube-api-access-g4ngf\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.091191 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.091827 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.249123 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" event={"ID":"7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6","Type":"ContainerDied","Data":"790fbf6a918add5ba281a00b361f6016ab8bf1318a0887812700522b07f5972f"} Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.249193 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.249208 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="790fbf6a918add5ba281a00b361f6016ab8bf1318a0887812700522b07f5972f" Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.752972 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6"] Dec 10 15:15:04 crc kubenswrapper[4718]: I1210 15:15:04.762504 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422950-4cxn6"] Dec 10 15:15:06 crc kubenswrapper[4718]: I1210 15:15:06.041950 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff4ff8a-f156-4da4-ba81-477078a8345d" path="/var/lib/kubelet/pods/dff4ff8a-f156-4da4-ba81-477078a8345d/volumes" Dec 10 15:15:06 crc kubenswrapper[4718]: I1210 15:15:06.328906 4718 generic.go:334] "Generic (PLEG): container finished" podID="4d6f6be0-1d66-4c7e-a8e9-09826a416501" containerID="7f8410331b3ca103e59c90e2f84f6b94dfe9203a51028b42e701f97df11fdc1b" exitCode=0 Dec 10 15:15:06 crc kubenswrapper[4718]: I1210 15:15:06.329003 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2" event={"ID":"4d6f6be0-1d66-4c7e-a8e9-09826a416501","Type":"ContainerDied","Data":"7f8410331b3ca103e59c90e2f84f6b94dfe9203a51028b42e701f97df11fdc1b"} Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.834947 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2" Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.840763 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swbdv\" (UniqueName: \"kubernetes.io/projected/4d6f6be0-1d66-4c7e-a8e9-09826a416501-kube-api-access-swbdv\") pod \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.840861 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-ssh-key-openstack-edpm-ipam\") pod \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.840969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-inventory-0\") pod \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\" (UID: \"4d6f6be0-1d66-4c7e-a8e9-09826a416501\") " Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.847251 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6f6be0-1d66-4c7e-a8e9-09826a416501-kube-api-access-swbdv" (OuterVolumeSpecName: "kube-api-access-swbdv") pod "4d6f6be0-1d66-4c7e-a8e9-09826a416501" (UID: "4d6f6be0-1d66-4c7e-a8e9-09826a416501"). InnerVolumeSpecName "kube-api-access-swbdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.879757 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4d6f6be0-1d66-4c7e-a8e9-09826a416501" (UID: "4d6f6be0-1d66-4c7e-a8e9-09826a416501"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.888686 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d6f6be0-1d66-4c7e-a8e9-09826a416501" (UID: "4d6f6be0-1d66-4c7e-a8e9-09826a416501"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.944097 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swbdv\" (UniqueName: \"kubernetes.io/projected/4d6f6be0-1d66-4c7e-a8e9-09826a416501-kube-api-access-swbdv\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.944143 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:07 crc kubenswrapper[4718]: I1210 15:15:07.944154 4718 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4d6f6be0-1d66-4c7e-a8e9-09826a416501-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.353990 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2" event={"ID":"4d6f6be0-1d66-4c7e-a8e9-09826a416501","Type":"ContainerDied","Data":"4d432d4ab408d14d112f2068d2db10f8513a4998098c6dc87b727eb1b4282477"} Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.354045 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d432d4ab408d14d112f2068d2db10f8513a4998098c6dc87b727eb1b4282477" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.354073 
4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bsdk2" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.501637 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb"] Dec 10 15:15:08 crc kubenswrapper[4718]: E1210 15:15:08.503675 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6f6be0-1d66-4c7e-a8e9-09826a416501" containerName="ssh-known-hosts-edpm-deployment" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.503768 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6f6be0-1d66-4c7e-a8e9-09826a416501" containerName="ssh-known-hosts-edpm-deployment" Dec 10 15:15:08 crc kubenswrapper[4718]: E1210 15:15:08.503806 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" containerName="collect-profiles" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.503817 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" containerName="collect-profiles" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.504508 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" containerName="collect-profiles" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.504570 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6f6be0-1d66-4c7e-a8e9-09826a416501" containerName="ssh-known-hosts-edpm-deployment" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.510717 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.515703 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.516306 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.516396 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.516442 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.527572 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb"] Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.555816 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.556014 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.556309 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85hpq\" (UniqueName: \"kubernetes.io/projected/311630cc-3a9b-48d5-9407-879b0f508508-kube-api-access-85hpq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.658199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85hpq\" (UniqueName: \"kubernetes.io/projected/311630cc-3a9b-48d5-9407-879b0f508508-kube-api-access-85hpq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.658280 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.658405 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.663794 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.663930 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.680071 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85hpq\" (UniqueName: \"kubernetes.io/projected/311630cc-3a9b-48d5-9407-879b0f508508-kube-api-access-85hpq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dt6pb\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:08 crc kubenswrapper[4718]: I1210 15:15:08.853263 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:09 crc kubenswrapper[4718]: I1210 15:15:09.388333 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb"] Dec 10 15:15:10 crc kubenswrapper[4718]: I1210 15:15:10.381268 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" event={"ID":"311630cc-3a9b-48d5-9407-879b0f508508","Type":"ContainerStarted","Data":"4f3ab22d565bf97d03d5637af9b90790bf9f90ce18fd2caaa024c2eaab52f82f"} Dec 10 15:15:11 crc kubenswrapper[4718]: I1210 15:15:11.393869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" event={"ID":"311630cc-3a9b-48d5-9407-879b0f508508","Type":"ContainerStarted","Data":"bf0dc4876336ce5818a52256c7b602f7a5b3db90f62eb4aa3e2aafeeffc1c019"} Dec 10 15:15:11 crc kubenswrapper[4718]: I1210 15:15:11.436922 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" podStartSLOduration=2.627894735 podStartE2EDuration="3.436894534s" podCreationTimestamp="2025-12-10 15:15:08 +0000 UTC" firstStartedPulling="2025-12-10 15:15:09.391927147 +0000 UTC m=+2614.341150564" lastFinishedPulling="2025-12-10 15:15:10.200926956 +0000 UTC m=+2615.150150363" observedRunningTime="2025-12-10 15:15:11.429017924 +0000 UTC m=+2616.378241361" watchObservedRunningTime="2025-12-10 15:15:11.436894534 +0000 UTC m=+2616.386117941" Dec 10 15:15:19 crc kubenswrapper[4718]: I1210 15:15:19.499244 4718 generic.go:334] "Generic (PLEG): container finished" podID="311630cc-3a9b-48d5-9407-879b0f508508" containerID="bf0dc4876336ce5818a52256c7b602f7a5b3db90f62eb4aa3e2aafeeffc1c019" exitCode=0 Dec 10 15:15:19 crc kubenswrapper[4718]: I1210 15:15:19.499346 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" event={"ID":"311630cc-3a9b-48d5-9407-879b0f508508","Type":"ContainerDied","Data":"bf0dc4876336ce5818a52256c7b602f7a5b3db90f62eb4aa3e2aafeeffc1c019"} Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.057440 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.151136 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-ssh-key\") pod \"311630cc-3a9b-48d5-9407-879b0f508508\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.151304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-inventory\") pod \"311630cc-3a9b-48d5-9407-879b0f508508\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.151615 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85hpq\" (UniqueName: \"kubernetes.io/projected/311630cc-3a9b-48d5-9407-879b0f508508-kube-api-access-85hpq\") pod \"311630cc-3a9b-48d5-9407-879b0f508508\" (UID: \"311630cc-3a9b-48d5-9407-879b0f508508\") " Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.159770 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311630cc-3a9b-48d5-9407-879b0f508508-kube-api-access-85hpq" (OuterVolumeSpecName: "kube-api-access-85hpq") pod "311630cc-3a9b-48d5-9407-879b0f508508" (UID: "311630cc-3a9b-48d5-9407-879b0f508508"). InnerVolumeSpecName "kube-api-access-85hpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.190802 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "311630cc-3a9b-48d5-9407-879b0f508508" (UID: "311630cc-3a9b-48d5-9407-879b0f508508"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.194774 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-inventory" (OuterVolumeSpecName: "inventory") pod "311630cc-3a9b-48d5-9407-879b0f508508" (UID: "311630cc-3a9b-48d5-9407-879b0f508508"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.254101 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.254141 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/311630cc-3a9b-48d5-9407-879b0f508508-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.254151 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85hpq\" (UniqueName: \"kubernetes.io/projected/311630cc-3a9b-48d5-9407-879b0f508508-kube-api-access-85hpq\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.552364 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" 
event={"ID":"311630cc-3a9b-48d5-9407-879b0f508508","Type":"ContainerDied","Data":"4f3ab22d565bf97d03d5637af9b90790bf9f90ce18fd2caaa024c2eaab52f82f"} Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.552450 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3ab22d565bf97d03d5637af9b90790bf9f90ce18fd2caaa024c2eaab52f82f" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.552452 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dt6pb" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.688564 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6"] Dec 10 15:15:21 crc kubenswrapper[4718]: E1210 15:15:21.695314 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311630cc-3a9b-48d5-9407-879b0f508508" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.695410 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="311630cc-3a9b-48d5-9407-879b0f508508" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.696454 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="311630cc-3a9b-48d5-9407-879b0f508508" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.711547 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.720830 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.722688 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.722708 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.726802 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.729246 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6"] Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.811654 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.812122 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.812183 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/71d72952-f3a0-4c3c-97f8-26c143f154cc-kube-api-access-6hw7n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.928185 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.920383 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.930594 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.930698 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/71d72952-f3a0-4c3c-97f8-26c143f154cc-kube-api-access-6hw7n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: 
\"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.936857 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:21 crc kubenswrapper[4718]: I1210 15:15:21.962825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/71d72952-f3a0-4c3c-97f8-26c143f154cc-kube-api-access-6hw7n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:22 crc kubenswrapper[4718]: I1210 15:15:22.043782 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:23 crc kubenswrapper[4718]: I1210 15:15:23.174370 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6"] Dec 10 15:15:23 crc kubenswrapper[4718]: I1210 15:15:23.671101 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" event={"ID":"71d72952-f3a0-4c3c-97f8-26c143f154cc","Type":"ContainerStarted","Data":"868286f8548a0b680ac72863838a9dea279a68b557b1ac316c5971f99ce595c4"} Dec 10 15:15:24 crc kubenswrapper[4718]: I1210 15:15:24.699277 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" event={"ID":"71d72952-f3a0-4c3c-97f8-26c143f154cc","Type":"ContainerStarted","Data":"cdf187713a05e2f8f81d383abef6edb59b39bfb9e10fbc740b0904687111d6d7"} Dec 10 15:15:24 crc kubenswrapper[4718]: I1210 15:15:24.730076 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" podStartSLOduration=3.231281893 podStartE2EDuration="3.730051085s" podCreationTimestamp="2025-12-10 15:15:21 +0000 UTC" firstStartedPulling="2025-12-10 15:15:23.198176204 +0000 UTC m=+2628.147399621" lastFinishedPulling="2025-12-10 15:15:23.696945386 +0000 UTC m=+2628.646168813" observedRunningTime="2025-12-10 15:15:24.719864846 +0000 UTC m=+2629.669088283" watchObservedRunningTime="2025-12-10 15:15:24.730051085 +0000 UTC m=+2629.679274502" Dec 10 15:15:36 crc kubenswrapper[4718]: I1210 15:15:36.113128 4718 generic.go:334] "Generic (PLEG): container finished" podID="71d72952-f3a0-4c3c-97f8-26c143f154cc" containerID="cdf187713a05e2f8f81d383abef6edb59b39bfb9e10fbc740b0904687111d6d7" exitCode=0 Dec 10 15:15:36 crc kubenswrapper[4718]: I1210 15:15:36.113244 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" event={"ID":"71d72952-f3a0-4c3c-97f8-26c143f154cc","Type":"ContainerDied","Data":"cdf187713a05e2f8f81d383abef6edb59b39bfb9e10fbc740b0904687111d6d7"} Dec 10 15:15:37 crc kubenswrapper[4718]: I1210 15:15:37.913609 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.111090 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-ssh-key\") pod \"71d72952-f3a0-4c3c-97f8-26c143f154cc\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.111511 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-inventory\") pod \"71d72952-f3a0-4c3c-97f8-26c143f154cc\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.111594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/71d72952-f3a0-4c3c-97f8-26c143f154cc-kube-api-access-6hw7n\") pod \"71d72952-f3a0-4c3c-97f8-26c143f154cc\" (UID: \"71d72952-f3a0-4c3c-97f8-26c143f154cc\") " Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.124779 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71d72952-f3a0-4c3c-97f8-26c143f154cc-kube-api-access-6hw7n" (OuterVolumeSpecName: "kube-api-access-6hw7n") pod "71d72952-f3a0-4c3c-97f8-26c143f154cc" (UID: "71d72952-f3a0-4c3c-97f8-26c143f154cc"). InnerVolumeSpecName "kube-api-access-6hw7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.137815 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" event={"ID":"71d72952-f3a0-4c3c-97f8-26c143f154cc","Type":"ContainerDied","Data":"868286f8548a0b680ac72863838a9dea279a68b557b1ac316c5971f99ce595c4"} Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.137872 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868286f8548a0b680ac72863838a9dea279a68b557b1ac316c5971f99ce595c4" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.137943 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.158679 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "71d72952-f3a0-4c3c-97f8-26c143f154cc" (UID: "71d72952-f3a0-4c3c-97f8-26c143f154cc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.159562 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-inventory" (OuterVolumeSpecName: "inventory") pod "71d72952-f3a0-4c3c-97f8-26c143f154cc" (UID: "71d72952-f3a0-4c3c-97f8-26c143f154cc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.214345 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/71d72952-f3a0-4c3c-97f8-26c143f154cc-kube-api-access-6hw7n\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.214408 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.214425 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71d72952-f3a0-4c3c-97f8-26c143f154cc-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.241425 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl"] Dec 10 15:15:38 crc kubenswrapper[4718]: E1210 15:15:38.242064 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71d72952-f3a0-4c3c-97f8-26c143f154cc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.242091 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="71d72952-f3a0-4c3c-97f8-26c143f154cc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.242360 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="71d72952-f3a0-4c3c-97f8-26c143f154cc" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.243476 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.247027 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.247249 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.247209 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.251128 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.253983 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl"] Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.418685 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.418770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.418810 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419554 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419623 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419676 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: 
\"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419790 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419819 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42m2\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-kube-api-access-h42m2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419911 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.419970 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: 
\"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.420075 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.420227 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.420297 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.420339 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522623 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522726 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522774 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522831 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 
15:15:38.522870 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522901 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522933 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522960 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.522987 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.523060 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.523089 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h42m2\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-kube-api-access-h42m2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.523120 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.523146 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.523205 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.527427 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.527731 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.528060 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: 
\"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.528088 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.528939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.530035 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.530837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc 
kubenswrapper[4718]: I1210 15:15:38.531016 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.531170 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.531662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.532539 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.534819 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.542670 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.546201 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h42m2\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-kube-api-access-h42m2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:38 crc kubenswrapper[4718]: I1210 15:15:38.569973 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:15:39 crc kubenswrapper[4718]: I1210 15:15:39.201836 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl"] Dec 10 15:15:40 crc kubenswrapper[4718]: I1210 15:15:40.179224 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" event={"ID":"3de81842-2365-419e-88fd-b0b4611f3e8e","Type":"ContainerStarted","Data":"dc06f49b4750b87744aebff64b67069eabfeffdfe919ea1355327572ed148c2a"} Dec 10 15:15:41 crc kubenswrapper[4718]: I1210 15:15:41.191873 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" event={"ID":"3de81842-2365-419e-88fd-b0b4611f3e8e","Type":"ContainerStarted","Data":"842db75ee0877c82ae972935e2855e2620ad9ad6f82a800b66e30b10b9909120"} Dec 10 15:15:41 crc kubenswrapper[4718]: I1210 15:15:41.219319 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" podStartSLOduration=2.706344322 podStartE2EDuration="3.219293125s" podCreationTimestamp="2025-12-10 15:15:38 +0000 UTC" firstStartedPulling="2025-12-10 15:15:39.214760265 +0000 UTC m=+2644.163983682" lastFinishedPulling="2025-12-10 15:15:39.727709068 +0000 UTC m=+2644.676932485" observedRunningTime="2025-12-10 15:15:41.218791702 +0000 UTC m=+2646.168015129" watchObservedRunningTime="2025-12-10 15:15:41.219293125 +0000 UTC m=+2646.168516542" Dec 10 15:15:55 crc kubenswrapper[4718]: I1210 15:15:55.303695 4718 scope.go:117] "RemoveContainer" containerID="16d0b72ce0b64ddface8a5196cfcbb8344cd1b17ab07acef22fab7c9df7217b7" Dec 10 15:15:55 crc kubenswrapper[4718]: I1210 15:15:55.337943 4718 scope.go:117] "RemoveContainer" containerID="3d0778dc12dd262505f9f4eb58b634c40987acff8ff079b4c716cfaf2f0e0a47" Dec 10 15:15:55 
crc kubenswrapper[4718]: I1210 15:15:55.416566 4718 scope.go:117] "RemoveContainer" containerID="330696d53bf2fcdd150be923dfcd303cfa693508609ad2a772a5f3c8bf4791ea" Dec 10 15:16:18 crc kubenswrapper[4718]: I1210 15:16:18.084060 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:16:18 crc kubenswrapper[4718]: I1210 15:16:18.085241 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:16:24 crc kubenswrapper[4718]: I1210 15:16:24.165150 4718 generic.go:334] "Generic (PLEG): container finished" podID="3de81842-2365-419e-88fd-b0b4611f3e8e" containerID="842db75ee0877c82ae972935e2855e2620ad9ad6f82a800b66e30b10b9909120" exitCode=0 Dec 10 15:16:24 crc kubenswrapper[4718]: I1210 15:16:24.165412 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" event={"ID":"3de81842-2365-419e-88fd-b0b4611f3e8e","Type":"ContainerDied","Data":"842db75ee0877c82ae972935e2855e2620ad9ad6f82a800b66e30b10b9909120"} Dec 10 15:16:25 crc kubenswrapper[4718]: I1210 15:16:25.930887 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-nova-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007592 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ovn-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007768 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007802 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-telemetry-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007843 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: 
\"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ssh-key\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007935 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-inventory\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.007983 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-neutron-metadata-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.008008 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.008034 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-libvirt-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.008077 4718 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-repo-setup-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.008110 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-bootstrap-combined-ca-bundle\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.008209 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.008281 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h42m2\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-kube-api-access-h42m2\") pod \"3de81842-2365-419e-88fd-b0b4611f3e8e\" (UID: \"3de81842-2365-419e-88fd-b0b4611f3e8e\") " Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.015466 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.016080 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.016588 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.016809 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.017057 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-kube-api-access-h42m2" (OuterVolumeSpecName: "kube-api-access-h42m2") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "kube-api-access-h42m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.019960 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.022384 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.022860 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.023613 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). 
InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.027541 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.029028 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.032289 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.053266 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-inventory" (OuterVolumeSpecName: "inventory") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.074213 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3de81842-2365-419e-88fd-b0b4611f3e8e" (UID: "3de81842-2365-419e-88fd-b0b4611f3e8e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112589 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112630 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112643 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112656 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112721 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 
15:16:26.112756 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112813 4718 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112852 4718 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112871 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112891 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h42m2\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-kube-api-access-h42m2\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112904 4718 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112920 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112933 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/3de81842-2365-419e-88fd-b0b4611f3e8e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.112946 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de81842-2365-419e-88fd-b0b4611f3e8e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.219971 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" event={"ID":"3de81842-2365-419e-88fd-b0b4611f3e8e","Type":"ContainerDied","Data":"dc06f49b4750b87744aebff64b67069eabfeffdfe919ea1355327572ed148c2a"} Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.220035 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc06f49b4750b87744aebff64b67069eabfeffdfe919ea1355327572ed148c2a" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.220060 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.631797 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42"] Dec 10 15:16:26 crc kubenswrapper[4718]: E1210 15:16:26.632333 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de81842-2365-419e-88fd-b0b4611f3e8e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.632351 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de81842-2365-419e-88fd-b0b4611f3e8e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.632625 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de81842-2365-419e-88fd-b0b4611f3e8e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.633381 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.636289 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.636279 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.636339 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.636290 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.636710 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.641408 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42"] Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.727246 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.727336 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.727400 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8ace4c93-2b2a-4185-b16a-d782334fa608-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.727500 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxhq\" (UniqueName: \"kubernetes.io/projected/8ace4c93-2b2a-4185-b16a-d782334fa608-kube-api-access-dtxhq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.727547 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.829416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.829490 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.829553 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8ace4c93-2b2a-4185-b16a-d782334fa608-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.829701 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxhq\" (UniqueName: \"kubernetes.io/projected/8ace4c93-2b2a-4185-b16a-d782334fa608-kube-api-access-dtxhq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.829767 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.831034 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8ace4c93-2b2a-4185-b16a-d782334fa608-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc 
kubenswrapper[4718]: I1210 15:16:26.834060 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.835046 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.837835 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.853220 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxhq\" (UniqueName: \"kubernetes.io/projected/8ace4c93-2b2a-4185-b16a-d782334fa608-kube-api-access-dtxhq\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-sfg42\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:26 crc kubenswrapper[4718]: I1210 15:16:26.962714 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:16:27 crc kubenswrapper[4718]: I1210 15:16:27.752457 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42"] Dec 10 15:16:28 crc kubenswrapper[4718]: I1210 15:16:28.246473 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" event={"ID":"8ace4c93-2b2a-4185-b16a-d782334fa608","Type":"ContainerStarted","Data":"eaea478cc5b90b04ad5ce560e4ca27309a6556e73b02af6193b8f349c0015770"} Dec 10 15:16:31 crc kubenswrapper[4718]: I1210 15:16:31.281907 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" event={"ID":"8ace4c93-2b2a-4185-b16a-d782334fa608","Type":"ContainerStarted","Data":"19d110d144ccfeab73581020173051f4d2eeea5ef283f82894c0b3b817285fab"} Dec 10 15:16:31 crc kubenswrapper[4718]: I1210 15:16:31.311428 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" podStartSLOduration=3.125607688 podStartE2EDuration="5.31138126s" podCreationTimestamp="2025-12-10 15:16:26 +0000 UTC" firstStartedPulling="2025-12-10 15:16:27.760230015 +0000 UTC m=+2692.709453442" lastFinishedPulling="2025-12-10 15:16:29.946003597 +0000 UTC m=+2694.895227014" observedRunningTime="2025-12-10 15:16:31.299447897 +0000 UTC m=+2696.248671314" watchObservedRunningTime="2025-12-10 15:16:31.31138126 +0000 UTC m=+2696.260604677" Dec 10 15:16:48 crc kubenswrapper[4718]: I1210 15:16:48.085142 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:16:48 crc kubenswrapper[4718]: I1210 15:16:48.085802 4718 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:16:55 crc kubenswrapper[4718]: I1210 15:16:55.598431 4718 scope.go:117] "RemoveContainer" containerID="e7923e10bb5cf71f83f170c307bb32eab3ad5258cf20d44e9f3ec95b3a96397c" Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.085125 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.086066 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.086137 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.087256 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bafd0187b023eee030e985bc8fa82589611e4791ca7f50b17d0f5ccdb81fde8"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.087310 4718 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://7bafd0187b023eee030e985bc8fa82589611e4791ca7f50b17d0f5ccdb81fde8" gracePeriod=600 Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.681819 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="7bafd0187b023eee030e985bc8fa82589611e4791ca7f50b17d0f5ccdb81fde8" exitCode=0 Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.681910 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"7bafd0187b023eee030e985bc8fa82589611e4791ca7f50b17d0f5ccdb81fde8"} Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.682869 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0"} Dec 10 15:17:18 crc kubenswrapper[4718]: I1210 15:17:18.682907 4718 scope.go:117] "RemoveContainer" containerID="4270ecfcf97657b622e29b00041edfd4de3210eeefd5147a944aedf873bf5e26" Dec 10 15:17:41 crc kubenswrapper[4718]: I1210 15:17:41.285165 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ace4c93-2b2a-4185-b16a-d782334fa608" containerID="19d110d144ccfeab73581020173051f4d2eeea5ef283f82894c0b3b817285fab" exitCode=0 Dec 10 15:17:41 crc kubenswrapper[4718]: I1210 15:17:41.285260 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" event={"ID":"8ace4c93-2b2a-4185-b16a-d782334fa608","Type":"ContainerDied","Data":"19d110d144ccfeab73581020173051f4d2eeea5ef283f82894c0b3b817285fab"} Dec 10 15:17:42 crc 
kubenswrapper[4718]: I1210 15:17:42.812883 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.945969 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8ace4c93-2b2a-4185-b16a-d782334fa608-ovncontroller-config-0\") pod \"8ace4c93-2b2a-4185-b16a-d782334fa608\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.946092 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-inventory\") pod \"8ace4c93-2b2a-4185-b16a-d782334fa608\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.946168 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtxhq\" (UniqueName: \"kubernetes.io/projected/8ace4c93-2b2a-4185-b16a-d782334fa608-kube-api-access-dtxhq\") pod \"8ace4c93-2b2a-4185-b16a-d782334fa608\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.946288 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ssh-key\") pod \"8ace4c93-2b2a-4185-b16a-d782334fa608\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.946518 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ovn-combined-ca-bundle\") pod \"8ace4c93-2b2a-4185-b16a-d782334fa608\" (UID: \"8ace4c93-2b2a-4185-b16a-d782334fa608\") " Dec 10 15:17:42 crc 
kubenswrapper[4718]: I1210 15:17:42.957605 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "8ace4c93-2b2a-4185-b16a-d782334fa608" (UID: "8ace4c93-2b2a-4185-b16a-d782334fa608"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.958759 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ace4c93-2b2a-4185-b16a-d782334fa608-kube-api-access-dtxhq" (OuterVolumeSpecName: "kube-api-access-dtxhq") pod "8ace4c93-2b2a-4185-b16a-d782334fa608" (UID: "8ace4c93-2b2a-4185-b16a-d782334fa608"). InnerVolumeSpecName "kube-api-access-dtxhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.988180 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ace4c93-2b2a-4185-b16a-d782334fa608-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "8ace4c93-2b2a-4185-b16a-d782334fa608" (UID: "8ace4c93-2b2a-4185-b16a-d782334fa608"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.988609 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8ace4c93-2b2a-4185-b16a-d782334fa608" (UID: "8ace4c93-2b2a-4185-b16a-d782334fa608"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:17:42 crc kubenswrapper[4718]: I1210 15:17:42.993166 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-inventory" (OuterVolumeSpecName: "inventory") pod "8ace4c93-2b2a-4185-b16a-d782334fa608" (UID: "8ace4c93-2b2a-4185-b16a-d782334fa608"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.049978 4718 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/8ace4c93-2b2a-4185-b16a-d782334fa608-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.050030 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.050043 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtxhq\" (UniqueName: \"kubernetes.io/projected/8ace4c93-2b2a-4185-b16a-d782334fa608-kube-api-access-dtxhq\") on node \"crc\" DevicePath \"\"" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.050055 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.050070 4718 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ace4c93-2b2a-4185-b16a-d782334fa608-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.315879 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" event={"ID":"8ace4c93-2b2a-4185-b16a-d782334fa608","Type":"ContainerDied","Data":"eaea478cc5b90b04ad5ce560e4ca27309a6556e73b02af6193b8f349c0015770"} Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.315941 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaea478cc5b90b04ad5ce560e4ca27309a6556e73b02af6193b8f349c0015770" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.315944 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-sfg42" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.436111 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb"] Dec 10 15:17:43 crc kubenswrapper[4718]: E1210 15:17:43.436745 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ace4c93-2b2a-4185-b16a-d782334fa608" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.436771 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ace4c93-2b2a-4185-b16a-d782334fa608" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.437098 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ace4c93-2b2a-4185-b16a-d782334fa608" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.438238 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.441107 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.441508 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.450703 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.450803 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.450870 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.452145 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.469560 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb"] Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.563460 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.563628 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.563691 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.564033 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.564101 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.564677 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cwfxl\" (UniqueName: \"kubernetes.io/projected/2ce587d1-61d0-4844-bb2b-54894131a5bb-kube-api-access-cwfxl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.815290 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.815435 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.815485 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.815552 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.815841 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.816362 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwfxl\" (UniqueName: \"kubernetes.io/projected/2ce587d1-61d0-4844-bb2b-54894131a5bb-kube-api-access-cwfxl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.827834 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.828746 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: 
\"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.831990 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.842262 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.849468 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:43 crc kubenswrapper[4718]: I1210 15:17:43.859886 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwfxl\" (UniqueName: \"kubernetes.io/projected/2ce587d1-61d0-4844-bb2b-54894131a5bb-kube-api-access-cwfxl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:44 crc kubenswrapper[4718]: I1210 15:17:44.066762 4718 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:17:44 crc kubenswrapper[4718]: I1210 15:17:44.669941 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb"] Dec 10 15:17:45 crc kubenswrapper[4718]: I1210 15:17:45.340291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" event={"ID":"2ce587d1-61d0-4844-bb2b-54894131a5bb","Type":"ContainerStarted","Data":"c3327d3e375ad1d37996b8c4d219cdcea50aae0c871b4e454c2c78b879f2025a"} Dec 10 15:17:46 crc kubenswrapper[4718]: I1210 15:17:46.355411 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" event={"ID":"2ce587d1-61d0-4844-bb2b-54894131a5bb","Type":"ContainerStarted","Data":"077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc"} Dec 10 15:17:46 crc kubenswrapper[4718]: I1210 15:17:46.379155 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" podStartSLOduration=2.708388747 podStartE2EDuration="3.379125875s" podCreationTimestamp="2025-12-10 15:17:43 +0000 UTC" firstStartedPulling="2025-12-10 15:17:44.675827023 +0000 UTC m=+2769.625050440" lastFinishedPulling="2025-12-10 15:17:45.346564151 +0000 UTC m=+2770.295787568" observedRunningTime="2025-12-10 15:17:46.378787136 +0000 UTC m=+2771.328010563" watchObservedRunningTime="2025-12-10 15:17:46.379125875 +0000 UTC m=+2771.328349292" Dec 10 15:18:35 crc kubenswrapper[4718]: I1210 15:18:35.998822 4718 generic.go:334] "Generic (PLEG): container finished" podID="2ce587d1-61d0-4844-bb2b-54894131a5bb" containerID="077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc" exitCode=0 Dec 10 15:18:35 crc kubenswrapper[4718]: I1210 15:18:35.998864 4718 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" event={"ID":"2ce587d1-61d0-4844-bb2b-54894131a5bb","Type":"ContainerDied","Data":"077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc"} Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.489696 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.539363 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2ce587d1-61d0-4844-bb2b-54894131a5bb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.539610 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-ssh-key\") pod \"2ce587d1-61d0-4844-bb2b-54894131a5bb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.539735 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-metadata-combined-ca-bundle\") pod \"2ce587d1-61d0-4844-bb2b-54894131a5bb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.539831 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwfxl\" (UniqueName: \"kubernetes.io/projected/2ce587d1-61d0-4844-bb2b-54894131a5bb-kube-api-access-cwfxl\") pod \"2ce587d1-61d0-4844-bb2b-54894131a5bb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " Dec 10 15:18:37 
crc kubenswrapper[4718]: I1210 15:18:37.539868 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-inventory\") pod \"2ce587d1-61d0-4844-bb2b-54894131a5bb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.539913 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-nova-metadata-neutron-config-0\") pod \"2ce587d1-61d0-4844-bb2b-54894131a5bb\" (UID: \"2ce587d1-61d0-4844-bb2b-54894131a5bb\") " Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.557137 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2ce587d1-61d0-4844-bb2b-54894131a5bb" (UID: "2ce587d1-61d0-4844-bb2b-54894131a5bb"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.557427 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce587d1-61d0-4844-bb2b-54894131a5bb-kube-api-access-cwfxl" (OuterVolumeSpecName: "kube-api-access-cwfxl") pod "2ce587d1-61d0-4844-bb2b-54894131a5bb" (UID: "2ce587d1-61d0-4844-bb2b-54894131a5bb"). InnerVolumeSpecName "kube-api-access-cwfxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.585311 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2ce587d1-61d0-4844-bb2b-54894131a5bb" (UID: "2ce587d1-61d0-4844-bb2b-54894131a5bb"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.588085 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2ce587d1-61d0-4844-bb2b-54894131a5bb" (UID: "2ce587d1-61d0-4844-bb2b-54894131a5bb"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.588578 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2ce587d1-61d0-4844-bb2b-54894131a5bb" (UID: "2ce587d1-61d0-4844-bb2b-54894131a5bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.590252 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-inventory" (OuterVolumeSpecName: "inventory") pod "2ce587d1-61d0-4844-bb2b-54894131a5bb" (UID: "2ce587d1-61d0-4844-bb2b-54894131a5bb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.643019 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.643060 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.643080 4718 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.643093 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwfxl\" (UniqueName: \"kubernetes.io/projected/2ce587d1-61d0-4844-bb2b-54894131a5bb-kube-api-access-cwfxl\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.643103 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:37 crc kubenswrapper[4718]: I1210 15:18:37.643112 4718 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2ce587d1-61d0-4844-bb2b-54894131a5bb-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.022348 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.032946 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb" event={"ID":"2ce587d1-61d0-4844-bb2b-54894131a5bb","Type":"ContainerDied","Data":"c3327d3e375ad1d37996b8c4d219cdcea50aae0c871b4e454c2c78b879f2025a"} Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.033014 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3327d3e375ad1d37996b8c4d219cdcea50aae0c871b4e454c2c78b879f2025a" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.146735 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj"] Dec 10 15:18:38 crc kubenswrapper[4718]: E1210 15:18:38.147596 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce587d1-61d0-4844-bb2b-54894131a5bb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.147641 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce587d1-61d0-4844-bb2b-54894131a5bb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.148011 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce587d1-61d0-4844-bb2b-54894131a5bb" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.149194 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.153596 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.153639 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.153853 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.153918 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.154209 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.154585 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.154634 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.154657 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.154681 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.154728 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgb7b\" (UniqueName: \"kubernetes.io/projected/080c7769-f2d8-47fa-aa3d-a1b63190a679-kube-api-access-sgb7b\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.159006 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj"] Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.257995 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.258059 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.258078 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.258099 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.258159 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgb7b\" (UniqueName: \"kubernetes.io/projected/080c7769-f2d8-47fa-aa3d-a1b63190a679-kube-api-access-sgb7b\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.264745 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.264745 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.264849 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.271977 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.277912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgb7b\" (UniqueName: \"kubernetes.io/projected/080c7769-f2d8-47fa-aa3d-a1b63190a679-kube-api-access-sgb7b\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: I1210 15:18:38.481048 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:18:38 crc kubenswrapper[4718]: E1210 15:18:38.709309 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce587d1_61d0_4844_bb2b_54894131a5bb.slice/crio-077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:18:39 crc kubenswrapper[4718]: I1210 15:18:39.079026 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj"] Dec 10 15:18:39 crc kubenswrapper[4718]: W1210 15:18:39.082643 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod080c7769_f2d8_47fa_aa3d_a1b63190a679.slice/crio-e2c212aa2084bc437d3112c3339e9c9763e75e9eb5e2a364ee900d6fdb49fc72 WatchSource:0}: Error finding container e2c212aa2084bc437d3112c3339e9c9763e75e9eb5e2a364ee900d6fdb49fc72: Status 404 returned error can't find the container with id e2c212aa2084bc437d3112c3339e9c9763e75e9eb5e2a364ee900d6fdb49fc72 Dec 10 15:18:39 crc kubenswrapper[4718]: I1210 15:18:39.087777 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:18:40 crc kubenswrapper[4718]: I1210 15:18:40.074243 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" event={"ID":"080c7769-f2d8-47fa-aa3d-a1b63190a679","Type":"ContainerStarted","Data":"e2c212aa2084bc437d3112c3339e9c9763e75e9eb5e2a364ee900d6fdb49fc72"} Dec 10 15:18:41 crc kubenswrapper[4718]: I1210 15:18:41.080354 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" 
event={"ID":"080c7769-f2d8-47fa-aa3d-a1b63190a679","Type":"ContainerStarted","Data":"14428f469c27ac79998cbf88ca93395d235c579add17409ea7d615cc4e04ad53"} Dec 10 15:18:41 crc kubenswrapper[4718]: I1210 15:18:41.110237 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" podStartSLOduration=2.268683099 podStartE2EDuration="3.110177262s" podCreationTimestamp="2025-12-10 15:18:38 +0000 UTC" firstStartedPulling="2025-12-10 15:18:39.087333167 +0000 UTC m=+2824.036556584" lastFinishedPulling="2025-12-10 15:18:39.92882733 +0000 UTC m=+2824.878050747" observedRunningTime="2025-12-10 15:18:41.099692305 +0000 UTC m=+2826.048915722" watchObservedRunningTime="2025-12-10 15:18:41.110177262 +0000 UTC m=+2826.059400679" Dec 10 15:18:49 crc kubenswrapper[4718]: E1210 15:18:49.033192 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce587d1_61d0_4844_bb2b_54894131a5bb.slice/crio-077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:18:59 crc kubenswrapper[4718]: E1210 15:18:59.328451 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce587d1_61d0_4844_bb2b_54894131a5bb.slice/crio-077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:19:09 crc kubenswrapper[4718]: E1210 15:19:09.688017 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce587d1_61d0_4844_bb2b_54894131a5bb.slice/crio-077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc.scope\": 
RecentStats: unable to find data in memory cache]" Dec 10 15:19:18 crc kubenswrapper[4718]: I1210 15:19:18.084029 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:19:18 crc kubenswrapper[4718]: I1210 15:19:18.084680 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:19:19 crc kubenswrapper[4718]: E1210 15:19:19.974052 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce587d1_61d0_4844_bb2b_54894131a5bb.slice/crio-077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:19:30 crc kubenswrapper[4718]: E1210 15:19:30.286201 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce587d1_61d0_4844_bb2b_54894131a5bb.slice/crio-077895d0398e553922cd84ee37772f91722f5ded4e60cb977c29d69ce587ddfc.scope\": RecentStats: unable to find data in memory cache]" Dec 10 15:19:48 crc kubenswrapper[4718]: I1210 15:19:48.084304 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:19:48 
crc kubenswrapper[4718]: I1210 15:19:48.085250 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.084812 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.085345 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.085413 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.086496 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.086550 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" gracePeriod=600 Dec 10 15:20:18 crc kubenswrapper[4718]: E1210 15:20:18.214852 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.622749 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" exitCode=0 Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.622821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0"} Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.622887 4718 scope.go:117] "RemoveContainer" containerID="7bafd0187b023eee030e985bc8fa82589611e4791ca7f50b17d0f5ccdb81fde8" Dec 10 15:20:18 crc kubenswrapper[4718]: I1210 15:20:18.623754 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:20:18 crc kubenswrapper[4718]: E1210 15:20:18.624224 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.293826 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-csz77"] Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.296944 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.312121 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-csz77"] Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.320843 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8b4\" (UniqueName: \"kubernetes.io/projected/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-kube-api-access-zw8b4\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.320944 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-catalog-content\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.321028 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-utilities\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " 
pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.476764 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-catalog-content\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.476906 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-utilities\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.477094 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw8b4\" (UniqueName: \"kubernetes.io/projected/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-kube-api-access-zw8b4\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.477486 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-catalog-content\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.477814 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-utilities\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " 
pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.512755 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw8b4\" (UniqueName: \"kubernetes.io/projected/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-kube-api-access-zw8b4\") pod \"community-operators-csz77\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:27 crc kubenswrapper[4718]: I1210 15:20:27.633620 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:28 crc kubenswrapper[4718]: I1210 15:20:28.206587 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-csz77"] Dec 10 15:20:28 crc kubenswrapper[4718]: W1210 15:20:28.217655 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0bc83f_8884_4c57_b3c5_ab5d38a76393.slice/crio-c90d75b31fa4ca0589dc5b48e2488f3792c5208723870cf80598f80cde25692a WatchSource:0}: Error finding container c90d75b31fa4ca0589dc5b48e2488f3792c5208723870cf80598f80cde25692a: Status 404 returned error can't find the container with id c90d75b31fa4ca0589dc5b48e2488f3792c5208723870cf80598f80cde25692a Dec 10 15:20:28 crc kubenswrapper[4718]: I1210 15:20:28.742857 4718 generic.go:334] "Generic (PLEG): container finished" podID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerID="a2c463916c69f37940766ffab2ea84220a095a36df8015d9a2eb89891bde0e8f" exitCode=0 Dec 10 15:20:28 crc kubenswrapper[4718]: I1210 15:20:28.742925 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerDied","Data":"a2c463916c69f37940766ffab2ea84220a095a36df8015d9a2eb89891bde0e8f"} Dec 10 15:20:28 crc kubenswrapper[4718]: I1210 15:20:28.743427 
4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerStarted","Data":"c90d75b31fa4ca0589dc5b48e2488f3792c5208723870cf80598f80cde25692a"} Dec 10 15:20:31 crc kubenswrapper[4718]: I1210 15:20:31.779186 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerStarted","Data":"a1f9b330c3430902c00d13a71d350bddbdedf06d8d5d4c4c77cbfafe2dd4be29"} Dec 10 15:20:32 crc kubenswrapper[4718]: I1210 15:20:32.234083 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:20:32 crc kubenswrapper[4718]: E1210 15:20:32.234462 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:20:33 crc kubenswrapper[4718]: I1210 15:20:33.917659 4718 generic.go:334] "Generic (PLEG): container finished" podID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerID="a1f9b330c3430902c00d13a71d350bddbdedf06d8d5d4c4c77cbfafe2dd4be29" exitCode=0 Dec 10 15:20:33 crc kubenswrapper[4718]: I1210 15:20:33.917785 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerDied","Data":"a1f9b330c3430902c00d13a71d350bddbdedf06d8d5d4c4c77cbfafe2dd4be29"} Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.043065 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerStarted","Data":"dc2f521366574926e9e8d0c955afa8c038669f9b1a63302b1f1722de5c1403e3"} Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.073432 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-csz77" podStartSLOduration=2.436514349 podStartE2EDuration="8.073406006s" podCreationTimestamp="2025-12-10 15:20:27 +0000 UTC" firstStartedPulling="2025-12-10 15:20:28.745086556 +0000 UTC m=+2933.694309963" lastFinishedPulling="2025-12-10 15:20:34.381978203 +0000 UTC m=+2939.331201620" observedRunningTime="2025-12-10 15:20:35.069092427 +0000 UTC m=+2940.018315844" watchObservedRunningTime="2025-12-10 15:20:35.073406006 +0000 UTC m=+2940.022629423" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.473241 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-66hhb"] Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.477121 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.486956 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66hhb"] Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.641158 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d566q\" (UniqueName: \"kubernetes.io/projected/9228db18-ae68-483c-a474-3e654e72ea9a-kube-api-access-d566q\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.641656 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-utilities\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.642719 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-catalog-content\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.745495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-catalog-content\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.745570 4718 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-d566q\" (UniqueName: \"kubernetes.io/projected/9228db18-ae68-483c-a474-3e654e72ea9a-kube-api-access-d566q\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.745644 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-utilities\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.746060 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-catalog-content\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.746095 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-utilities\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.775782 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d566q\" (UniqueName: \"kubernetes.io/projected/9228db18-ae68-483c-a474-3e654e72ea9a-kube-api-access-d566q\") pod \"redhat-operators-66hhb\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:35 crc kubenswrapper[4718]: I1210 15:20:35.804966 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:36 crc kubenswrapper[4718]: I1210 15:20:36.684296 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-66hhb"] Dec 10 15:20:37 crc kubenswrapper[4718]: I1210 15:20:37.072124 4718 generic.go:334] "Generic (PLEG): container finished" podID="9228db18-ae68-483c-a474-3e654e72ea9a" containerID="64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585" exitCode=0 Dec 10 15:20:37 crc kubenswrapper[4718]: I1210 15:20:37.072231 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerDied","Data":"64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585"} Dec 10 15:20:37 crc kubenswrapper[4718]: I1210 15:20:37.072473 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerStarted","Data":"517b9e9b2667a92ba98087bdeaf6f2d5e3513bf21b7b678135d0683fcb5cd719"} Dec 10 15:20:37 crc kubenswrapper[4718]: I1210 15:20:37.634194 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:37 crc kubenswrapper[4718]: I1210 15:20:37.634544 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:38 crc kubenswrapper[4718]: I1210 15:20:38.683718 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-csz77" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="registry-server" probeResult="failure" output=< Dec 10 15:20:38 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 15:20:38 crc kubenswrapper[4718]: > Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 
15:20:39.095206 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerStarted","Data":"5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12"} Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 15:20:39.868817 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pg2zx"] Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 15:20:39.872011 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 15:20:39.882429 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg2zx"] Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 15:20:39.953373 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-catalog-content\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 15:20:39.953508 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkrj\" (UniqueName: \"kubernetes.io/projected/85d9945e-e143-4ac5-9054-b1aff0f907a7-kube-api-access-zgkrj\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:39 crc kubenswrapper[4718]: I1210 15:20:39.953585 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-utilities\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " 
pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.056019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-catalog-content\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.056192 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgkrj\" (UniqueName: \"kubernetes.io/projected/85d9945e-e143-4ac5-9054-b1aff0f907a7-kube-api-access-zgkrj\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.056314 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-utilities\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.056903 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-utilities\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.057425 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-catalog-content\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" 
Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.089904 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgkrj\" (UniqueName: \"kubernetes.io/projected/85d9945e-e143-4ac5-9054-b1aff0f907a7-kube-api-access-zgkrj\") pod \"redhat-marketplace-pg2zx\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.222779 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:40 crc kubenswrapper[4718]: I1210 15:20:40.730826 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg2zx"] Dec 10 15:20:40 crc kubenswrapper[4718]: W1210 15:20:40.731043 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85d9945e_e143_4ac5_9054_b1aff0f907a7.slice/crio-a63dd2e77b4d3079e19ced1578379557c32d50a39c1f88239ff1147b39ee6a07 WatchSource:0}: Error finding container a63dd2e77b4d3079e19ced1578379557c32d50a39c1f88239ff1147b39ee6a07: Status 404 returned error can't find the container with id a63dd2e77b4d3079e19ced1578379557c32d50a39c1f88239ff1147b39ee6a07 Dec 10 15:20:41 crc kubenswrapper[4718]: I1210 15:20:41.115786 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerStarted","Data":"a63dd2e77b4d3079e19ced1578379557c32d50a39c1f88239ff1147b39ee6a07"} Dec 10 15:20:42 crc kubenswrapper[4718]: I1210 15:20:42.129303 4718 generic.go:334] "Generic (PLEG): container finished" podID="9228db18-ae68-483c-a474-3e654e72ea9a" containerID="5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12" exitCode=0 Dec 10 15:20:42 crc kubenswrapper[4718]: I1210 15:20:42.129353 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerDied","Data":"5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12"} Dec 10 15:20:43 crc kubenswrapper[4718]: I1210 15:20:43.145052 4718 generic.go:334] "Generic (PLEG): container finished" podID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerID="be5ecc76b8d4cc384ca2cafb70bf64944a4323191defe95e9d80a9b4c7acc3df" exitCode=0 Dec 10 15:20:43 crc kubenswrapper[4718]: I1210 15:20:43.146148 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerDied","Data":"be5ecc76b8d4cc384ca2cafb70bf64944a4323191defe95e9d80a9b4c7acc3df"} Dec 10 15:20:44 crc kubenswrapper[4718]: I1210 15:20:44.161038 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerStarted","Data":"4206e35f9f6ef8727df5e15ec883f6f9149cd175dd8f8ff91c09c4ad06ec84f8"} Dec 10 15:20:44 crc kubenswrapper[4718]: I1210 15:20:44.164408 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerStarted","Data":"64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41"} Dec 10 15:20:45 crc kubenswrapper[4718]: I1210 15:20:45.178111 4718 generic.go:334] "Generic (PLEG): container finished" podID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerID="4206e35f9f6ef8727df5e15ec883f6f9149cd175dd8f8ff91c09c4ad06ec84f8" exitCode=0 Dec 10 15:20:45 crc kubenswrapper[4718]: I1210 15:20:45.178207 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerDied","Data":"4206e35f9f6ef8727df5e15ec883f6f9149cd175dd8f8ff91c09c4ad06ec84f8"} Dec 
10 15:20:45 crc kubenswrapper[4718]: I1210 15:20:45.206287 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-66hhb" podStartSLOduration=3.943779666 podStartE2EDuration="10.206263085s" podCreationTimestamp="2025-12-10 15:20:35 +0000 UTC" firstStartedPulling="2025-12-10 15:20:37.075536426 +0000 UTC m=+2942.024759853" lastFinishedPulling="2025-12-10 15:20:43.338019855 +0000 UTC m=+2948.287243272" observedRunningTime="2025-12-10 15:20:44.210124926 +0000 UTC m=+2949.159348343" watchObservedRunningTime="2025-12-10 15:20:45.206263085 +0000 UTC m=+2950.155486502" Dec 10 15:20:45 crc kubenswrapper[4718]: I1210 15:20:45.805292 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:45 crc kubenswrapper[4718]: I1210 15:20:45.805379 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:46 crc kubenswrapper[4718]: I1210 15:20:46.029206 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:20:46 crc kubenswrapper[4718]: E1210 15:20:46.029797 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:20:46 crc kubenswrapper[4718]: I1210 15:20:46.196021 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerStarted","Data":"1a9607f1045f42d74b1e52abc9c37340b869b1d3d0c0d6be3fc14790b7618314"} 
Dec 10 15:20:46 crc kubenswrapper[4718]: I1210 15:20:46.234787 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pg2zx" podStartSLOduration=4.472811257 podStartE2EDuration="7.234762496s" podCreationTimestamp="2025-12-10 15:20:39 +0000 UTC" firstStartedPulling="2025-12-10 15:20:43.149412967 +0000 UTC m=+2948.098636384" lastFinishedPulling="2025-12-10 15:20:45.911364206 +0000 UTC m=+2950.860587623" observedRunningTime="2025-12-10 15:20:46.221781486 +0000 UTC m=+2951.171004913" watchObservedRunningTime="2025-12-10 15:20:46.234762496 +0000 UTC m=+2951.183985913" Dec 10 15:20:46 crc kubenswrapper[4718]: I1210 15:20:46.853245 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-66hhb" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="registry-server" probeResult="failure" output=< Dec 10 15:20:46 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 15:20:46 crc kubenswrapper[4718]: > Dec 10 15:20:47 crc kubenswrapper[4718]: I1210 15:20:47.693539 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:47 crc kubenswrapper[4718]: I1210 15:20:47.749415 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:50 crc kubenswrapper[4718]: I1210 15:20:50.222991 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:50 crc kubenswrapper[4718]: I1210 15:20:50.223257 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:20:50 crc kubenswrapper[4718]: I1210 15:20:50.272664 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pg2zx" 
Dec 10 15:20:51 crc kubenswrapper[4718]: I1210 15:20:51.859430 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-csz77"] Dec 10 15:20:51 crc kubenswrapper[4718]: I1210 15:20:51.859968 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-csz77" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="registry-server" containerID="cri-o://dc2f521366574926e9e8d0c955afa8c038669f9b1a63302b1f1722de5c1403e3" gracePeriod=2 Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.307285 4718 generic.go:334] "Generic (PLEG): container finished" podID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerID="dc2f521366574926e9e8d0c955afa8c038669f9b1a63302b1f1722de5c1403e3" exitCode=0 Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.307355 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerDied","Data":"dc2f521366574926e9e8d0c955afa8c038669f9b1a63302b1f1722de5c1403e3"} Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.446492 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.496978 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-utilities\") pod \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.497234 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-catalog-content\") pod \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.497281 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw8b4\" (UniqueName: \"kubernetes.io/projected/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-kube-api-access-zw8b4\") pod \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\" (UID: \"7c0bc83f-8884-4c57-b3c5-ab5d38a76393\") " Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.499854 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-utilities" (OuterVolumeSpecName: "utilities") pod "7c0bc83f-8884-4c57-b3c5-ab5d38a76393" (UID: "7c0bc83f-8884-4c57-b3c5-ab5d38a76393"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.506972 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-kube-api-access-zw8b4" (OuterVolumeSpecName: "kube-api-access-zw8b4") pod "7c0bc83f-8884-4c57-b3c5-ab5d38a76393" (UID: "7c0bc83f-8884-4c57-b3c5-ab5d38a76393"). InnerVolumeSpecName "kube-api-access-zw8b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.561202 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c0bc83f-8884-4c57-b3c5-ab5d38a76393" (UID: "7c0bc83f-8884-4c57-b3c5-ab5d38a76393"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.601362 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.601420 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw8b4\" (UniqueName: \"kubernetes.io/projected/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-kube-api-access-zw8b4\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:52 crc kubenswrapper[4718]: I1210 15:20:52.601437 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0bc83f-8884-4c57-b3c5-ab5d38a76393-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.324677 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csz77" event={"ID":"7c0bc83f-8884-4c57-b3c5-ab5d38a76393","Type":"ContainerDied","Data":"c90d75b31fa4ca0589dc5b48e2488f3792c5208723870cf80598f80cde25692a"} Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.324956 4718 scope.go:117] "RemoveContainer" containerID="dc2f521366574926e9e8d0c955afa8c038669f9b1a63302b1f1722de5c1403e3" Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.324769 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-csz77" Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.348313 4718 scope.go:117] "RemoveContainer" containerID="a1f9b330c3430902c00d13a71d350bddbdedf06d8d5d4c4c77cbfafe2dd4be29" Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.395582 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-csz77"] Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.405075 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-csz77"] Dec 10 15:20:53 crc kubenswrapper[4718]: I1210 15:20:53.407601 4718 scope.go:117] "RemoveContainer" containerID="a2c463916c69f37940766ffab2ea84220a095a36df8015d9a2eb89891bde0e8f" Dec 10 15:20:54 crc kubenswrapper[4718]: I1210 15:20:54.034899 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" path="/var/lib/kubelet/pods/7c0bc83f-8884-4c57-b3c5-ab5d38a76393/volumes" Dec 10 15:20:55 crc kubenswrapper[4718]: I1210 15:20:55.859412 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:55 crc kubenswrapper[4718]: I1210 15:20:55.921539 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.273854 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66hhb"] Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.274741 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-66hhb" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="registry-server" containerID="cri-o://64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41" gracePeriod=2 Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 
15:20:58.749831 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.877818 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-catalog-content\") pod \"9228db18-ae68-483c-a474-3e654e72ea9a\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.878062 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-utilities\") pod \"9228db18-ae68-483c-a474-3e654e72ea9a\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.878209 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d566q\" (UniqueName: \"kubernetes.io/projected/9228db18-ae68-483c-a474-3e654e72ea9a-kube-api-access-d566q\") pod \"9228db18-ae68-483c-a474-3e654e72ea9a\" (UID: \"9228db18-ae68-483c-a474-3e654e72ea9a\") " Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.879263 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-utilities" (OuterVolumeSpecName: "utilities") pod "9228db18-ae68-483c-a474-3e654e72ea9a" (UID: "9228db18-ae68-483c-a474-3e654e72ea9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.885287 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9228db18-ae68-483c-a474-3e654e72ea9a-kube-api-access-d566q" (OuterVolumeSpecName: "kube-api-access-d566q") pod "9228db18-ae68-483c-a474-3e654e72ea9a" (UID: "9228db18-ae68-483c-a474-3e654e72ea9a"). InnerVolumeSpecName "kube-api-access-d566q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.981913 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.982002 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d566q\" (UniqueName: \"kubernetes.io/projected/9228db18-ae68-483c-a474-3e654e72ea9a-kube-api-access-d566q\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:58 crc kubenswrapper[4718]: I1210 15:20:58.996315 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9228db18-ae68-483c-a474-3e654e72ea9a" (UID: "9228db18-ae68-483c-a474-3e654e72ea9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.085134 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9228db18-ae68-483c-a474-3e654e72ea9a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.392662 4718 generic.go:334] "Generic (PLEG): container finished" podID="9228db18-ae68-483c-a474-3e654e72ea9a" containerID="64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41" exitCode=0 Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.392729 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerDied","Data":"64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41"} Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.392745 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-66hhb" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.392784 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-66hhb" event={"ID":"9228db18-ae68-483c-a474-3e654e72ea9a","Type":"ContainerDied","Data":"517b9e9b2667a92ba98087bdeaf6f2d5e3513bf21b7b678135d0683fcb5cd719"} Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.392815 4718 scope.go:117] "RemoveContainer" containerID="64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.440628 4718 scope.go:117] "RemoveContainer" containerID="5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.441327 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-66hhb"] Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.453904 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-66hhb"] Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.471731 4718 scope.go:117] "RemoveContainer" containerID="64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.653190 4718 scope.go:117] "RemoveContainer" containerID="64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41" Dec 10 15:20:59 crc kubenswrapper[4718]: E1210 15:20:59.653786 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41\": container with ID starting with 64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41 not found: ID does not exist" containerID="64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.653839 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41"} err="failed to get container status \"64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41\": rpc error: code = NotFound desc = could not find container \"64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41\": container with ID starting with 64ec2c64361e51446fd57034461be1a854094142f81d26a5bbe4764023cf9e41 not found: ID does not exist" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.653876 4718 scope.go:117] "RemoveContainer" containerID="5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12" Dec 10 15:20:59 crc kubenswrapper[4718]: E1210 15:20:59.654282 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12\": container with ID starting with 5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12 not found: ID does not exist" containerID="5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.654371 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12"} err="failed to get container status \"5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12\": rpc error: code = NotFound desc = could not find container \"5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12\": container with ID starting with 5de94783126f5d4f898bb367f75f0fb456feb0de372e7519150280a969269e12 not found: ID does not exist" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.654452 4718 scope.go:117] "RemoveContainer" containerID="64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585" Dec 10 15:20:59 crc kubenswrapper[4718]: E1210 
15:20:59.654898 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585\": container with ID starting with 64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585 not found: ID does not exist" containerID="64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585" Dec 10 15:20:59 crc kubenswrapper[4718]: I1210 15:20:59.654954 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585"} err="failed to get container status \"64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585\": rpc error: code = NotFound desc = could not find container \"64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585\": container with ID starting with 64c0be3221163529c38bd07be807bf49600613c1d2c18b936315eacb0ccf7585 not found: ID does not exist" Dec 10 15:21:00 crc kubenswrapper[4718]: I1210 15:21:00.021735 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:21:00 crc kubenswrapper[4718]: E1210 15:21:00.022124 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:21:00 crc kubenswrapper[4718]: I1210 15:21:00.036827 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" path="/var/lib/kubelet/pods/9228db18-ae68-483c-a474-3e654e72ea9a/volumes" Dec 10 15:21:00 crc kubenswrapper[4718]: I1210 15:21:00.280368 
4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:21:02 crc kubenswrapper[4718]: I1210 15:21:02.058075 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg2zx"] Dec 10 15:21:02 crc kubenswrapper[4718]: I1210 15:21:02.058335 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pg2zx" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="registry-server" containerID="cri-o://1a9607f1045f42d74b1e52abc9c37340b869b1d3d0c0d6be3fc14790b7618314" gracePeriod=2 Dec 10 15:21:02 crc kubenswrapper[4718]: I1210 15:21:02.623281 4718 generic.go:334] "Generic (PLEG): container finished" podID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerID="1a9607f1045f42d74b1e52abc9c37340b869b1d3d0c0d6be3fc14790b7618314" exitCode=0 Dec 10 15:21:02 crc kubenswrapper[4718]: I1210 15:21:02.623589 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerDied","Data":"1a9607f1045f42d74b1e52abc9c37340b869b1d3d0c0d6be3fc14790b7618314"} Dec 10 15:21:02 crc kubenswrapper[4718]: I1210 15:21:02.879718 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.014207 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-catalog-content\") pod \"85d9945e-e143-4ac5-9054-b1aff0f907a7\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.014256 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgkrj\" (UniqueName: \"kubernetes.io/projected/85d9945e-e143-4ac5-9054-b1aff0f907a7-kube-api-access-zgkrj\") pod \"85d9945e-e143-4ac5-9054-b1aff0f907a7\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.014438 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-utilities\") pod \"85d9945e-e143-4ac5-9054-b1aff0f907a7\" (UID: \"85d9945e-e143-4ac5-9054-b1aff0f907a7\") " Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.016889 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-utilities" (OuterVolumeSpecName: "utilities") pod "85d9945e-e143-4ac5-9054-b1aff0f907a7" (UID: "85d9945e-e143-4ac5-9054-b1aff0f907a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.025711 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d9945e-e143-4ac5-9054-b1aff0f907a7-kube-api-access-zgkrj" (OuterVolumeSpecName: "kube-api-access-zgkrj") pod "85d9945e-e143-4ac5-9054-b1aff0f907a7" (UID: "85d9945e-e143-4ac5-9054-b1aff0f907a7"). InnerVolumeSpecName "kube-api-access-zgkrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.048243 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85d9945e-e143-4ac5-9054-b1aff0f907a7" (UID: "85d9945e-e143-4ac5-9054-b1aff0f907a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.118495 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.118562 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgkrj\" (UniqueName: \"kubernetes.io/projected/85d9945e-e143-4ac5-9054-b1aff0f907a7-kube-api-access-zgkrj\") on node \"crc\" DevicePath \"\"" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.118585 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85d9945e-e143-4ac5-9054-b1aff0f907a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.638063 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pg2zx" event={"ID":"85d9945e-e143-4ac5-9054-b1aff0f907a7","Type":"ContainerDied","Data":"a63dd2e77b4d3079e19ced1578379557c32d50a39c1f88239ff1147b39ee6a07"} Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.638132 4718 scope.go:117] "RemoveContainer" containerID="1a9607f1045f42d74b1e52abc9c37340b869b1d3d0c0d6be3fc14790b7618314" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.638154 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pg2zx" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.670189 4718 scope.go:117] "RemoveContainer" containerID="4206e35f9f6ef8727df5e15ec883f6f9149cd175dd8f8ff91c09c4ad06ec84f8" Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.683788 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg2zx"] Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.695371 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pg2zx"] Dec 10 15:21:03 crc kubenswrapper[4718]: I1210 15:21:03.697150 4718 scope.go:117] "RemoveContainer" containerID="be5ecc76b8d4cc384ca2cafb70bf64944a4323191defe95e9d80a9b4c7acc3df" Dec 10 15:21:04 crc kubenswrapper[4718]: I1210 15:21:04.037838 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" path="/var/lib/kubelet/pods/85d9945e-e143-4ac5-9054-b1aff0f907a7/volumes" Dec 10 15:21:12 crc kubenswrapper[4718]: I1210 15:21:12.022103 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:21:12 crc kubenswrapper[4718]: E1210 15:21:12.023957 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:21:25 crc kubenswrapper[4718]: I1210 15:21:25.020713 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:21:25 crc kubenswrapper[4718]: E1210 15:21:25.021542 4718 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:21:39 crc kubenswrapper[4718]: I1210 15:21:39.021446 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:21:39 crc kubenswrapper[4718]: E1210 15:21:39.022776 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:21:51 crc kubenswrapper[4718]: I1210 15:21:51.022936 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:21:51 crc kubenswrapper[4718]: E1210 15:21:51.024312 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:22:05 crc kubenswrapper[4718]: I1210 15:22:05.020581 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:22:05 crc kubenswrapper[4718]: E1210 15:22:05.022539 4718 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:22:19 crc kubenswrapper[4718]: I1210 15:22:19.020090 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:22:19 crc kubenswrapper[4718]: E1210 15:22:19.020952 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:22:30 crc kubenswrapper[4718]: I1210 15:22:30.021178 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:22:30 crc kubenswrapper[4718]: E1210 15:22:30.022457 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:22:41 crc kubenswrapper[4718]: I1210 15:22:41.021516 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:22:41 crc kubenswrapper[4718]: E1210 15:22:41.023035 4718 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:22:49 crc kubenswrapper[4718]: I1210 15:22:49.080874 4718 generic.go:334] "Generic (PLEG): container finished" podID="080c7769-f2d8-47fa-aa3d-a1b63190a679" containerID="14428f469c27ac79998cbf88ca93395d235c579add17409ea7d615cc4e04ad53" exitCode=0 Dec 10 15:22:49 crc kubenswrapper[4718]: I1210 15:22:49.080942 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" event={"ID":"080c7769-f2d8-47fa-aa3d-a1b63190a679","Type":"ContainerDied","Data":"14428f469c27ac79998cbf88ca93395d235c579add17409ea7d615cc4e04ad53"} Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.611751 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.728495 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgb7b\" (UniqueName: \"kubernetes.io/projected/080c7769-f2d8-47fa-aa3d-a1b63190a679-kube-api-access-sgb7b\") pod \"080c7769-f2d8-47fa-aa3d-a1b63190a679\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.728594 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-combined-ca-bundle\") pod \"080c7769-f2d8-47fa-aa3d-a1b63190a679\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.728658 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-inventory\") pod \"080c7769-f2d8-47fa-aa3d-a1b63190a679\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.728684 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-secret-0\") pod \"080c7769-f2d8-47fa-aa3d-a1b63190a679\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.728777 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-ssh-key\") pod \"080c7769-f2d8-47fa-aa3d-a1b63190a679\" (UID: \"080c7769-f2d8-47fa-aa3d-a1b63190a679\") " Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.736027 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/080c7769-f2d8-47fa-aa3d-a1b63190a679-kube-api-access-sgb7b" (OuterVolumeSpecName: "kube-api-access-sgb7b") pod "080c7769-f2d8-47fa-aa3d-a1b63190a679" (UID: "080c7769-f2d8-47fa-aa3d-a1b63190a679"). InnerVolumeSpecName "kube-api-access-sgb7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.736829 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "080c7769-f2d8-47fa-aa3d-a1b63190a679" (UID: "080c7769-f2d8-47fa-aa3d-a1b63190a679"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.768617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "080c7769-f2d8-47fa-aa3d-a1b63190a679" (UID: "080c7769-f2d8-47fa-aa3d-a1b63190a679"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.768682 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-inventory" (OuterVolumeSpecName: "inventory") pod "080c7769-f2d8-47fa-aa3d-a1b63190a679" (UID: "080c7769-f2d8-47fa-aa3d-a1b63190a679"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.778755 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "080c7769-f2d8-47fa-aa3d-a1b63190a679" (UID: "080c7769-f2d8-47fa-aa3d-a1b63190a679"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.831588 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.831641 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.831661 4718 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.831675 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/080c7769-f2d8-47fa-aa3d-a1b63190a679-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:22:50 crc kubenswrapper[4718]: I1210 15:22:50.831694 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgb7b\" (UniqueName: \"kubernetes.io/projected/080c7769-f2d8-47fa-aa3d-a1b63190a679-kube-api-access-sgb7b\") on node \"crc\" DevicePath \"\"" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.102787 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" event={"ID":"080c7769-f2d8-47fa-aa3d-a1b63190a679","Type":"ContainerDied","Data":"e2c212aa2084bc437d3112c3339e9c9763e75e9eb5e2a364ee900d6fdb49fc72"} Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.102833 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.102902 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2c212aa2084bc437d3112c3339e9c9763e75e9eb5e2a364ee900d6fdb49fc72" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.244053 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f"] Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245056 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="extract-content" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245091 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="extract-content" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245125 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="extract-utilities" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245135 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="extract-utilities" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245161 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="extract-utilities" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245169 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="extract-utilities" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245183 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245191 4718 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245209 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="extract-content" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245218 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="extract-content" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245250 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="extract-utilities" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245259 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="extract-utilities" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245278 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080c7769-f2d8-47fa-aa3d-a1b63190a679" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245287 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="080c7769-f2d8-47fa-aa3d-a1b63190a679" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245304 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="extract-content" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245311 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="extract-content" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245332 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 
15:22:51.245339 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: E1210 15:22:51.245360 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245368 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.245974 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="080c7769-f2d8-47fa-aa3d-a1b63190a679" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.246079 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0bc83f-8884-4c57-b3c5-ab5d38a76393" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.246102 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d9945e-e143-4ac5-9054-b1aff0f907a7" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.246115 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9228db18-ae68-483c-a474-3e654e72ea9a" containerName="registry-server" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.247295 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.251715 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.251715 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vqd8j" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.253239 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.253263 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.253461 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.253771 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.256307 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.258051 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f"] Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445278 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 
15:22:51.445343 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445502 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445628 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmxnb\" (UniqueName: \"kubernetes.io/projected/2dd95b44-946b-43ef-91a6-3eeab6ded836-kube-api-access-hmxnb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445719 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445770 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445871 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.445904 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmxnb\" (UniqueName: \"kubernetes.io/projected/2dd95b44-946b-43ef-91a6-3eeab6ded836-kube-api-access-hmxnb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548522 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548570 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548667 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548699 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548760 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548811 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548830 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.548934 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.550281 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc 
kubenswrapper[4718]: I1210 15:22:51.552613 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.553566 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.553570 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.553972 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.554271 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.554520 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.555374 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.569837 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmxnb\" (UniqueName: \"kubernetes.io/projected/2dd95b44-946b-43ef-91a6-3eeab6ded836-kube-api-access-hmxnb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-td49f\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:51 crc kubenswrapper[4718]: I1210 15:22:51.572146 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:22:52 crc kubenswrapper[4718]: I1210 15:22:52.171061 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f"] Dec 10 15:22:53 crc kubenswrapper[4718]: I1210 15:22:53.126190 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" event={"ID":"2dd95b44-946b-43ef-91a6-3eeab6ded836","Type":"ContainerStarted","Data":"eec3a63f95d20fd90f0cea72d1f1c0dbca286c2c5f32222d370838a506aaaedf"} Dec 10 15:22:54 crc kubenswrapper[4718]: I1210 15:22:54.138602 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" event={"ID":"2dd95b44-946b-43ef-91a6-3eeab6ded836","Type":"ContainerStarted","Data":"05ef222103e63a188c8042f1e6490816d18ee77d99556043213f29c7166b9192"} Dec 10 15:22:54 crc kubenswrapper[4718]: I1210 15:22:54.172128 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" podStartSLOduration=1.768211008 podStartE2EDuration="3.172069838s" podCreationTimestamp="2025-12-10 15:22:51 +0000 UTC" firstStartedPulling="2025-12-10 15:22:52.187074496 +0000 UTC m=+3077.136297913" lastFinishedPulling="2025-12-10 15:22:53.590933326 +0000 UTC m=+3078.540156743" observedRunningTime="2025-12-10 15:22:54.15636097 +0000 UTC m=+3079.105584387" watchObservedRunningTime="2025-12-10 15:22:54.172069838 +0000 UTC m=+3079.121293255" Dec 10 15:22:55 crc kubenswrapper[4718]: I1210 15:22:55.020372 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:22:55 crc kubenswrapper[4718]: E1210 15:22:55.020917 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:23:08 crc kubenswrapper[4718]: I1210 15:23:08.021509 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:23:08 crc kubenswrapper[4718]: E1210 15:23:08.023608 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:23:22 crc kubenswrapper[4718]: I1210 15:23:22.020806 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:23:22 crc kubenswrapper[4718]: E1210 15:23:22.021664 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:23:33 crc kubenswrapper[4718]: I1210 15:23:33.021715 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:23:33 crc kubenswrapper[4718]: E1210 15:23:33.022597 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:23:45 crc kubenswrapper[4718]: I1210 15:23:45.021008 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:23:45 crc kubenswrapper[4718]: E1210 15:23:45.021887 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:23:58 crc kubenswrapper[4718]: I1210 15:23:58.027502 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:23:58 crc kubenswrapper[4718]: E1210 15:23:58.028404 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:24:09 crc kubenswrapper[4718]: I1210 15:24:09.027207 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:24:09 crc kubenswrapper[4718]: E1210 15:24:09.028100 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:24:20 crc kubenswrapper[4718]: I1210 15:24:20.021093 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:24:20 crc kubenswrapper[4718]: E1210 15:24:20.022012 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:24:35 crc kubenswrapper[4718]: I1210 15:24:35.020842 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:24:35 crc kubenswrapper[4718]: E1210 15:24:35.021813 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:24:49 crc kubenswrapper[4718]: I1210 15:24:49.020938 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:24:49 crc kubenswrapper[4718]: E1210 15:24:49.021952 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:25:03 crc kubenswrapper[4718]: I1210 15:25:03.022420 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:25:03 crc kubenswrapper[4718]: E1210 15:25:03.023950 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:25:18 crc kubenswrapper[4718]: I1210 15:25:18.020967 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:25:18 crc kubenswrapper[4718]: E1210 15:25:18.022236 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:25:33 crc kubenswrapper[4718]: I1210 15:25:33.020964 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:25:33 crc kubenswrapper[4718]: I1210 15:25:33.644432 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"b0ebbb69003f414bf5979556281a474a5538c66cccb2f18435180d4ba1a9de43"} Dec 10 15:25:52 crc kubenswrapper[4718]: I1210 15:25:52.957813 4718 generic.go:334] "Generic (PLEG): container finished" podID="2dd95b44-946b-43ef-91a6-3eeab6ded836" containerID="05ef222103e63a188c8042f1e6490816d18ee77d99556043213f29c7166b9192" exitCode=0 Dec 10 15:25:52 crc kubenswrapper[4718]: I1210 15:25:52.957926 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" event={"ID":"2dd95b44-946b-43ef-91a6-3eeab6ded836","Type":"ContainerDied","Data":"05ef222103e63a188c8042f1e6490816d18ee77d99556043213f29c7166b9192"} Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.744880 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.911729 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-1\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.911880 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-inventory\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.911988 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-combined-ca-bundle\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.912152 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmxnb\" (UniqueName: \"kubernetes.io/projected/2dd95b44-946b-43ef-91a6-3eeab6ded836-kube-api-access-hmxnb\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.912344 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-extra-config-0\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.912527 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-ssh-key\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.912611 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-1\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.912680 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-0\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " 
Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.912798 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-0\") pod \"2dd95b44-946b-43ef-91a6-3eeab6ded836\" (UID: \"2dd95b44-946b-43ef-91a6-3eeab6ded836\") " Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.921075 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.921606 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd95b44-946b-43ef-91a6-3eeab6ded836-kube-api-access-hmxnb" (OuterVolumeSpecName: "kube-api-access-hmxnb") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "kube-api-access-hmxnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.949612 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.950149 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.963653 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.974726 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.976407 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.981400 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.991046 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" event={"ID":"2dd95b44-946b-43ef-91a6-3eeab6ded836","Type":"ContainerDied","Data":"eec3a63f95d20fd90f0cea72d1f1c0dbca286c2c5f32222d370838a506aaaedf"} Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.991118 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-td49f" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.991219 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec3a63f95d20fd90f0cea72d1f1c0dbca286c2c5f32222d370838a506aaaedf" Dec 10 15:25:54 crc kubenswrapper[4718]: I1210 15:25:54.998311 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-inventory" (OuterVolumeSpecName: "inventory") pod "2dd95b44-946b-43ef-91a6-3eeab6ded836" (UID: "2dd95b44-946b-43ef-91a6-3eeab6ded836"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.016857 4718 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.016944 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.016981 4718 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.016997 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmxnb\" (UniqueName: \"kubernetes.io/projected/2dd95b44-946b-43ef-91a6-3eeab6ded836-kube-api-access-hmxnb\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.017031 4718 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.017062 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.017074 4718 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc 
kubenswrapper[4718]: I1210 15:25:55.017083 4718 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.017104 4718 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2dd95b44-946b-43ef-91a6-3eeab6ded836-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.136437 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl"] Dec 10 15:25:55 crc kubenswrapper[4718]: E1210 15:25:55.137045 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd95b44-946b-43ef-91a6-3eeab6ded836" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.137080 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd95b44-946b-43ef-91a6-3eeab6ded836" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.137336 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd95b44-946b-43ef-91a6-3eeab6ded836" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.138270 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.141250 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.164170 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl"] Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.323828 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.324341 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.324517 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.324786 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.325079 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.325368 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.325614 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgrlc\" (UniqueName: \"kubernetes.io/projected/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-kube-api-access-jgrlc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428374 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428503 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428581 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgrlc\" (UniqueName: \"kubernetes.io/projected/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-kube-api-access-jgrlc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428623 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428718 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428743 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.428785 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.435995 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.436073 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.436312 
4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.436740 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.437290 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.438738 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.449998 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgrlc\" (UniqueName: \"kubernetes.io/projected/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-kube-api-access-jgrlc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl\" 
(UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:55 crc kubenswrapper[4718]: I1210 15:25:55.464367 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:25:56 crc kubenswrapper[4718]: I1210 15:25:56.303548 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl"] Dec 10 15:25:56 crc kubenswrapper[4718]: I1210 15:25:56.322261 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:25:57 crc kubenswrapper[4718]: I1210 15:25:57.182064 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" event={"ID":"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b","Type":"ContainerStarted","Data":"acc7910f03d0037b276224f89c50abcc7ba670b0bb76c0b580192be99c9fcc32"} Dec 10 15:25:57 crc kubenswrapper[4718]: I1210 15:25:57.182421 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" event={"ID":"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b","Type":"ContainerStarted","Data":"3a296fc95e3681323f4ca84551acb43a90d154a0f6a1224c5ef72175cf7ae47a"} Dec 10 15:25:57 crc kubenswrapper[4718]: I1210 15:25:57.265744 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" podStartSLOduration=1.747497916 podStartE2EDuration="2.265686196s" podCreationTimestamp="2025-12-10 15:25:55 +0000 UTC" firstStartedPulling="2025-12-10 15:25:56.321831755 +0000 UTC m=+3261.271055172" lastFinishedPulling="2025-12-10 15:25:56.840020035 +0000 UTC m=+3261.789243452" observedRunningTime="2025-12-10 15:25:57.250626806 +0000 UTC m=+3262.199850223" watchObservedRunningTime="2025-12-10 15:25:57.265686196 +0000 UTC 
m=+3262.214909613" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.533316 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4hm6l"] Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.536517 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.546618 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hm6l"] Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.676955 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-catalog-content\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.677226 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-utilities\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.677271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qstz\" (UniqueName: \"kubernetes.io/projected/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-kube-api-access-2qstz\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.780267 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-utilities\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.780348 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qstz\" (UniqueName: \"kubernetes.io/projected/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-kube-api-access-2qstz\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.780424 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-catalog-content\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.781644 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-catalog-content\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.781677 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-utilities\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.806811 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qstz\" (UniqueName: 
\"kubernetes.io/projected/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-kube-api-access-2qstz\") pod \"certified-operators-4hm6l\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:07 crc kubenswrapper[4718]: I1210 15:27:07.873943 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:08 crc kubenswrapper[4718]: I1210 15:27:08.717192 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4hm6l"] Dec 10 15:27:09 crc kubenswrapper[4718]: I1210 15:27:09.034835 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerStarted","Data":"bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76"} Dec 10 15:27:09 crc kubenswrapper[4718]: I1210 15:27:09.035357 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerStarted","Data":"01d99b429b2ee5d7b8a22e6e516a7e69215a8f7f56d003fe61f049176fc49f85"} Dec 10 15:27:10 crc kubenswrapper[4718]: I1210 15:27:10.066326 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerID="bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76" exitCode=0 Dec 10 15:27:10 crc kubenswrapper[4718]: I1210 15:27:10.066824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerDied","Data":"bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76"} Dec 10 15:27:10 crc kubenswrapper[4718]: I1210 15:27:10.066902 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" 
event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerStarted","Data":"d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde"} Dec 10 15:27:11 crc kubenswrapper[4718]: I1210 15:27:11.083718 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerID="d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde" exitCode=0 Dec 10 15:27:11 crc kubenswrapper[4718]: I1210 15:27:11.083808 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerDied","Data":"d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde"} Dec 10 15:27:12 crc kubenswrapper[4718]: I1210 15:27:12.098856 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerStarted","Data":"8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e"} Dec 10 15:27:12 crc kubenswrapper[4718]: I1210 15:27:12.123807 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4hm6l" podStartSLOduration=2.646875006 podStartE2EDuration="5.123777373s" podCreationTimestamp="2025-12-10 15:27:07 +0000 UTC" firstStartedPulling="2025-12-10 15:27:09.03746992 +0000 UTC m=+3333.986693337" lastFinishedPulling="2025-12-10 15:27:11.514372287 +0000 UTC m=+3336.463595704" observedRunningTime="2025-12-10 15:27:12.120652344 +0000 UTC m=+3337.069875761" watchObservedRunningTime="2025-12-10 15:27:12.123777373 +0000 UTC m=+3337.073000790" Dec 10 15:27:17 crc kubenswrapper[4718]: I1210 15:27:17.875208 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:17 crc kubenswrapper[4718]: I1210 15:27:17.876014 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:17 crc kubenswrapper[4718]: I1210 15:27:17.926410 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:18 crc kubenswrapper[4718]: I1210 15:27:18.230705 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:18 crc kubenswrapper[4718]: I1210 15:27:18.589283 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4hm6l"] Dec 10 15:27:20 crc kubenswrapper[4718]: I1210 15:27:20.201698 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4hm6l" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="registry-server" containerID="cri-o://8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e" gracePeriod=2 Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.161497 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.232222 4718 generic.go:334] "Generic (PLEG): container finished" podID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerID="8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e" exitCode=0 Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.232318 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerDied","Data":"8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e"} Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.232369 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4hm6l" event={"ID":"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a","Type":"ContainerDied","Data":"01d99b429b2ee5d7b8a22e6e516a7e69215a8f7f56d003fe61f049176fc49f85"} Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.232449 4718 scope.go:117] "RemoveContainer" containerID="8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.232606 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4hm6l" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.277315 4718 scope.go:117] "RemoveContainer" containerID="d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.296638 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-utilities\") pod \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.296814 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-catalog-content\") pod \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.296924 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qstz\" (UniqueName: \"kubernetes.io/projected/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-kube-api-access-2qstz\") pod \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\" (UID: \"8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a\") " Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.299122 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-utilities" (OuterVolumeSpecName: "utilities") pod "8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" (UID: "8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.306094 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-kube-api-access-2qstz" (OuterVolumeSpecName: "kube-api-access-2qstz") pod "8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" (UID: "8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a"). InnerVolumeSpecName "kube-api-access-2qstz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.308448 4718 scope.go:117] "RemoveContainer" containerID="bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.361179 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" (UID: "8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.399974 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.400029 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.400044 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qstz\" (UniqueName: \"kubernetes.io/projected/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a-kube-api-access-2qstz\") on node \"crc\" DevicePath \"\"" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.422906 4718 scope.go:117] "RemoveContainer" containerID="8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e" Dec 10 15:27:21 crc kubenswrapper[4718]: E1210 15:27:21.423745 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e\": container with ID starting with 8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e not found: ID does not exist" containerID="8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.423799 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e"} err="failed to get container status \"8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e\": rpc error: code = NotFound desc = could not find container \"8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e\": container with ID 
starting with 8e622a1c72670bd0befdb35a76e5131b1f2b43b6850fb3691996f66b79cd817e not found: ID does not exist" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.423880 4718 scope.go:117] "RemoveContainer" containerID="d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde" Dec 10 15:27:21 crc kubenswrapper[4718]: E1210 15:27:21.424605 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde\": container with ID starting with d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde not found: ID does not exist" containerID="d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.424654 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde"} err="failed to get container status \"d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde\": rpc error: code = NotFound desc = could not find container \"d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde\": container with ID starting with d0aa80e0977d55bed8b68edef713b6bafab51dcc91eb2614d7908b0b45b79dde not found: ID does not exist" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.424679 4718 scope.go:117] "RemoveContainer" containerID="bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76" Dec 10 15:27:21 crc kubenswrapper[4718]: E1210 15:27:21.425257 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76\": container with ID starting with bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76 not found: ID does not exist" containerID="bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76" Dec 10 
15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.425332 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76"} err="failed to get container status \"bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76\": rpc error: code = NotFound desc = could not find container \"bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76\": container with ID starting with bd3fba58806c48537d11570ec1617e2e647bb8a2b5dcb2180a15b3d497725f76 not found: ID does not exist" Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.579555 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4hm6l"] Dec 10 15:27:21 crc kubenswrapper[4718]: I1210 15:27:21.591558 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4hm6l"] Dec 10 15:27:22 crc kubenswrapper[4718]: I1210 15:27:22.033807 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" path="/var/lib/kubelet/pods/8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a/volumes" Dec 10 15:27:48 crc kubenswrapper[4718]: I1210 15:27:48.085214 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:27:48 crc kubenswrapper[4718]: I1210 15:27:48.086172 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:28:13 crc kubenswrapper[4718]: I1210 15:28:13.083823 4718 
generic.go:334] "Generic (PLEG): container finished" podID="7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" containerID="acc7910f03d0037b276224f89c50abcc7ba670b0bb76c0b580192be99c9fcc32" exitCode=0 Dec 10 15:28:13 crc kubenswrapper[4718]: I1210 15:28:13.083930 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" event={"ID":"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b","Type":"ContainerDied","Data":"acc7910f03d0037b276224f89c50abcc7ba670b0bb76c0b580192be99c9fcc32"} Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.758493 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905122 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-2\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905192 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ssh-key\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905258 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-0\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905415 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-inventory\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905466 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgrlc\" (UniqueName: \"kubernetes.io/projected/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-kube-api-access-jgrlc\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905502 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-telemetry-combined-ca-bundle\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.905625 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-1\") pod \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\" (UID: \"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b\") " Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.914224 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.926142 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-kube-api-access-jgrlc" (OuterVolumeSpecName: "kube-api-access-jgrlc") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "kube-api-access-jgrlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.943616 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-inventory" (OuterVolumeSpecName: "inventory") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.951641 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.953963 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.962739 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:14 crc kubenswrapper[4718]: I1210 15:28:14.971702 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" (UID: "7aeaa205-67e7-4b41-a2d6-fff74ab0d61b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008831 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008881 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008893 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008903 4718 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008923 4718 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-inventory\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008938 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgrlc\" (UniqueName: \"kubernetes.io/projected/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-kube-api-access-jgrlc\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.008948 4718 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aeaa205-67e7-4b41-a2d6-fff74ab0d61b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.109042 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" event={"ID":"7aeaa205-67e7-4b41-a2d6-fff74ab0d61b","Type":"ContainerDied","Data":"3a296fc95e3681323f4ca84551acb43a90d154a0f6a1224c5ef72175cf7ae47a"} Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.109112 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a296fc95e3681323f4ca84551acb43a90d154a0f6a1224c5ef72175cf7ae47a" Dec 10 15:28:15 crc kubenswrapper[4718]: I1210 15:28:15.109127 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl" Dec 10 15:28:18 crc kubenswrapper[4718]: I1210 15:28:18.085266 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:28:18 crc kubenswrapper[4718]: I1210 15:28:18.086488 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.084110 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.085208 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.085330 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.086929 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b0ebbb69003f414bf5979556281a474a5538c66cccb2f18435180d4ba1a9de43"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.087008 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://b0ebbb69003f414bf5979556281a474a5538c66cccb2f18435180d4ba1a9de43" gracePeriod=600 Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.528613 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="b0ebbb69003f414bf5979556281a474a5538c66cccb2f18435180d4ba1a9de43" exitCode=0 Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.528703 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"b0ebbb69003f414bf5979556281a474a5538c66cccb2f18435180d4ba1a9de43"} Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.529254 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b"} Dec 10 15:28:48 crc kubenswrapper[4718]: I1210 15:28:48.529318 4718 scope.go:117] "RemoveContainer" containerID="65f273b169c25d47441ce0cd6888b81917c4ea628742cda4d8d97b9f88784ba0" Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.436815 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.438281 4718 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="prometheus" containerID="cri-o://b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0" gracePeriod=600 Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.440658 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="config-reloader" containerID="cri-o://23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf" gracePeriod=600 Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.440658 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="thanos-sidecar" containerID="cri-o://a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7" gracePeriod=600 Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.761480 4718 generic.go:334] "Generic (PLEG): container finished" podID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerID="a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7" exitCode=0 Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.761977 4718 generic.go:334] "Generic (PLEG): container finished" podID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerID="b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0" exitCode=0 Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.761515 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerDied","Data":"a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7"} Dec 10 15:28:56 crc kubenswrapper[4718]: I1210 15:28:56.762050 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerDied","Data":"b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0"} Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.586673 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.638244 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-thanos-prometheus-http-client-file\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.638651 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-tls-assets\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.638888 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.638944 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-secret-combined-ca-bundle\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639013 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639064 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639089 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config-out\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639271 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm59j\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-kube-api-access-xm59j\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639303 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639339 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-prometheus-metric-storage-rulefiles-0\") 
pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.639408 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\" (UID: \"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a\") " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.645961 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.658756 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.658803 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.661766 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.662036 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.664481 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.664845 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config-out" (OuterVolumeSpecName: "config-out") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.666609 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config" (OuterVolumeSpecName: "config") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.671890 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-kube-api-access-xm59j" (OuterVolumeSpecName: "kube-api-access-xm59j") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "kube-api-access-xm59j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.699679 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743102 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm59j\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-kube-api-access-xm59j\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743154 4718 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743176 4718 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743189 4718 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743248 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") on node \"crc\" " Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743260 4718 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743269 4718 reconciler_common.go:293] "Volume 
detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743281 4718 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743291 4718 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.743300 4718 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-config-out\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.777879 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config" (OuterVolumeSpecName: "web-config") pod "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" (UID: "9261d0c6-0506-4f71-ad72-76eb7dd4ac0a"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.783706 4718 generic.go:334] "Generic (PLEG): container finished" podID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerID="23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf" exitCode=0 Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.783759 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerDied","Data":"23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf"} Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.783791 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9261d0c6-0506-4f71-ad72-76eb7dd4ac0a","Type":"ContainerDied","Data":"85ce5f60b0cdd99a5a754c2bed864db403c9218d2ecb82c9cd834bcea355b53d"} Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.783810 4718 scope.go:117] "RemoveContainer" containerID="a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.784063 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.798509 4718 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.798738 4718 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46") on node "crc" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.845864 4718 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a-web-config\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.845915 4718 reconciler_common.go:293] "Volume detached for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") on node \"crc\" DevicePath \"\"" Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.905709 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:28:57 crc kubenswrapper[4718]: I1210 15:28:57.906702 4718 scope.go:117] "RemoveContainer" containerID="23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.116846 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.125636 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126532 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="thanos-sidecar" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126568 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="thanos-sidecar" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126610 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="init-config-reloader" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126623 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="init-config-reloader" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126652 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="prometheus" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126663 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="prometheus" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126686 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="extract-utilities" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126695 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="extract-utilities" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126728 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="extract-content" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126737 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="extract-content" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126752 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="config-reloader" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126760 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="config-reloader" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126777 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="registry-server" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126785 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="registry-server" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.126801 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126810 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.127103 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ece88b0-ae6d-4e8f-8e0d-5eadc92c555a" containerName="registry-server" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.127138 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="thanos-sidecar" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.127155 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="prometheus" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.127170 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aeaa205-67e7-4b41-a2d6-fff74ab0d61b" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.127186 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" containerName="config-reloader" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.126559 4718 scope.go:117] "RemoveContainer" 
containerID="b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.131050 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.136034 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8v2sk" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.136459 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.136660 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.136686 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.136902 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.155828 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.160137 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.197453 4718 scope.go:117] "RemoveContainer" containerID="c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.235856 4718 scope.go:117] "RemoveContainer" containerID="a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.238256 4718 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7\": container with ID starting with a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7 not found: ID does not exist" containerID="a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.238309 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7"} err="failed to get container status \"a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7\": rpc error: code = NotFound desc = could not find container \"a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7\": container with ID starting with a6895eee2c578cff62226d2105459b82da186cce242a503cccec972eea4b7ea7 not found: ID does not exist" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.238352 4718 scope.go:117] "RemoveContainer" containerID="23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.238821 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf\": container with ID starting with 23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf not found: ID does not exist" containerID="23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.238867 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf"} err="failed to get container status \"23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf\": rpc error: code = NotFound desc = could not find container 
\"23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf\": container with ID starting with 23931b46d54d68f781ff0af77258a1a4336dd892c7bae77f04b963b41689c6cf not found: ID does not exist" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.238881 4718 scope.go:117] "RemoveContainer" containerID="b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.239263 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0\": container with ID starting with b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0 not found: ID does not exist" containerID="b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.239282 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0"} err="failed to get container status \"b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0\": rpc error: code = NotFound desc = could not find container \"b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0\": container with ID starting with b37608f294d0a205897165cd8ffe52eeb5998e5893fb0745c136901828bea0a0 not found: ID does not exist" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.239294 4718 scope.go:117] "RemoveContainer" containerID="c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33" Dec 10 15:28:58 crc kubenswrapper[4718]: E1210 15:28:58.240010 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33\": container with ID starting with c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33 not found: ID does not exist" 
containerID="c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.240095 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33"} err="failed to get container status \"c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33\": rpc error: code = NotFound desc = could not find container \"c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33\": container with ID starting with c1a5f18a598e9f56435185660387a975748c62c649fb18418697ce5535793d33 not found: ID does not exist" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.290926 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-config\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291143 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291226 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291283 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cac0f06e-eca1-4268-9fb6-78207619e61c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291576 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291616 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gdh\" (UniqueName: \"kubernetes.io/projected/cac0f06e-eca1-4268-9fb6-78207619e61c-kube-api-access-n4gdh\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291842 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cac0f06e-eca1-4268-9fb6-78207619e61c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.291909 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.293112 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cac0f06e-eca1-4268-9fb6-78207619e61c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.293515 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.293578 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.395940 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.395999 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n4gdh\" (UniqueName: \"kubernetes.io/projected/cac0f06e-eca1-4268-9fb6-78207619e61c-kube-api-access-n4gdh\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.396157 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cac0f06e-eca1-4268-9fb6-78207619e61c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.397140 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.397747 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cac0f06e-eca1-4268-9fb6-78207619e61c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.398214 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.398253 4718 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.398439 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-config\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.398582 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.398620 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.398665 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cac0f06e-eca1-4268-9fb6-78207619e61c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.399978 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cac0f06e-eca1-4268-9fb6-78207619e61c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.400821 4718 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.400948 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/536f9c0805135df6ee87eba8f71795b119991da876d5796e8953829643544095/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.402237 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cac0f06e-eca1-4268-9fb6-78207619e61c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.402922 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cac0f06e-eca1-4268-9fb6-78207619e61c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.404510 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.404986 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.405898 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.406542 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.406688 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.407471 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cac0f06e-eca1-4268-9fb6-78207619e61c-config\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.422732 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gdh\" (UniqueName: \"kubernetes.io/projected/cac0f06e-eca1-4268-9fb6-78207619e61c-kube-api-access-n4gdh\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.453292 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9caec827-4c52-4f0c-a90d-988b3cf66a46\") pod \"prometheus-metric-storage-0\" (UID: \"cac0f06e-eca1-4268-9fb6-78207619e61c\") " pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:58 crc kubenswrapper[4718]: I1210 15:28:58.496604 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 10 15:28:59 crc kubenswrapper[4718]: I1210 15:28:59.005710 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 10 15:28:59 crc kubenswrapper[4718]: I1210 15:28:59.817316 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cac0f06e-eca1-4268-9fb6-78207619e61c","Type":"ContainerStarted","Data":"d41dee4b5c40705b79b02794ac0727892d5c58977bd24a5d4ac30df409c84e58"} Dec 10 15:29:00 crc kubenswrapper[4718]: I1210 15:29:00.033822 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9261d0c6-0506-4f71-ad72-76eb7dd4ac0a" path="/var/lib/kubelet/pods/9261d0c6-0506-4f71-ad72-76eb7dd4ac0a/volumes" Dec 10 15:29:08 crc kubenswrapper[4718]: I1210 15:29:08.923655 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cac0f06e-eca1-4268-9fb6-78207619e61c","Type":"ContainerStarted","Data":"7253ca0092106dbdd01e999dd676068f872fcfbb111b828a09cfd551d1070fd9"} Dec 10 15:29:17 crc kubenswrapper[4718]: I1210 15:29:17.075615 4718 generic.go:334] "Generic (PLEG): container finished" podID="cac0f06e-eca1-4268-9fb6-78207619e61c" containerID="7253ca0092106dbdd01e999dd676068f872fcfbb111b828a09cfd551d1070fd9" exitCode=0 Dec 10 15:29:17 crc kubenswrapper[4718]: I1210 15:29:17.075723 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cac0f06e-eca1-4268-9fb6-78207619e61c","Type":"ContainerDied","Data":"7253ca0092106dbdd01e999dd676068f872fcfbb111b828a09cfd551d1070fd9"} Dec 10 15:29:18 crc kubenswrapper[4718]: I1210 15:29:18.106178 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cac0f06e-eca1-4268-9fb6-78207619e61c","Type":"ContainerStarted","Data":"aeca7473297b0bf0400ee30eca8f14cd966c769c6982f5adcbace9a08f8e20b2"} Dec 10 
15:29:22 crc kubenswrapper[4718]: I1210 15:29:22.153607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cac0f06e-eca1-4268-9fb6-78207619e61c","Type":"ContainerStarted","Data":"131f63bbaa0107178b3af22f344f324eaf44a937798f1c1641fe305ba4a13103"} Dec 10 15:29:23 crc kubenswrapper[4718]: I1210 15:29:23.178719 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cac0f06e-eca1-4268-9fb6-78207619e61c","Type":"ContainerStarted","Data":"7795937d9bf4a46c2b1b8bab6677da10c4a3a69919de87dc7250bcafa9f42e02"} Dec 10 15:29:23 crc kubenswrapper[4718]: I1210 15:29:23.215158 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.2151093 podStartE2EDuration="26.2151093s" podCreationTimestamp="2025-12-10 15:28:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 15:29:23.205716962 +0000 UTC m=+3468.154940379" watchObservedRunningTime="2025-12-10 15:29:23.2151093 +0000 UTC m=+3468.164332717" Dec 10 15:29:23 crc kubenswrapper[4718]: I1210 15:29:23.497150 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 10 15:29:28 crc kubenswrapper[4718]: I1210 15:29:28.497051 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 10 15:29:28 crc kubenswrapper[4718]: I1210 15:29:28.505491 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 10 15:29:29 crc kubenswrapper[4718]: I1210 15:29:29.323493 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 10 15:29:50 crc kubenswrapper[4718]: I1210 15:29:50.978968 4718 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 10 15:29:50 crc kubenswrapper[4718]: I1210 15:29:50.981537 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 10 15:29:50 crc kubenswrapper[4718]: I1210 15:29:50.986184 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 10 15:29:50 crc kubenswrapper[4718]: I1210 15:29:50.986500 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 10 15:29:50 crc kubenswrapper[4718]: I1210 15:29:50.986750 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 10 15:29:50 crc kubenswrapper[4718]: I1210 15:29:50.986931 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vwgkm" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.002682 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.100881 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.101287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.101497 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-config-data\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.101697 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.101965 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.102085 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.102233 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.102361 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck94q\" (UniqueName: \"kubernetes.io/projected/df134785-8fb2-418f-89ba-55f6d822f50a-kube-api-access-ck94q\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.102570 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.205238 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-config-data\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.205923 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.206761 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.206883 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.206984 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck94q\" (UniqueName: \"kubernetes.io/projected/df134785-8fb2-418f-89ba-55f6d822f50a-kube-api-access-ck94q\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.207044 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.207281 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.207496 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.207627 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.207642 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.208199 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.215839 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.218001 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.218569 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.223227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.224076 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.227501 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-config-data\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.230241 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck94q\" (UniqueName: \"kubernetes.io/projected/df134785-8fb2-418f-89ba-55f6d822f50a-kube-api-access-ck94q\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.249951 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.316725 4718 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.692094 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 10 15:29:51 crc kubenswrapper[4718]: I1210 15:29:51.740730 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"df134785-8fb2-418f-89ba-55f6d822f50a","Type":"ContainerStarted","Data":"14cb497c47bcd778008bf70848bc1510428dc849baf8fa2d13a6bad1eaf1e2a0"} Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.165538 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq"] Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.168296 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.172145 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.172163 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.181154 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq"] Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.231741 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjp58\" (UniqueName: \"kubernetes.io/projected/4a02cd7d-3718-4e26-b054-9a47df1e475b-kube-api-access-cjp58\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.231812 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a02cd7d-3718-4e26-b054-9a47df1e475b-secret-volume\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.232018 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a02cd7d-3718-4e26-b054-9a47df1e475b-config-volume\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.334260 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a02cd7d-3718-4e26-b054-9a47df1e475b-config-volume\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.334484 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjp58\" (UniqueName: \"kubernetes.io/projected/4a02cd7d-3718-4e26-b054-9a47df1e475b-kube-api-access-cjp58\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.334526 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4a02cd7d-3718-4e26-b054-9a47df1e475b-secret-volume\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.337085 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a02cd7d-3718-4e26-b054-9a47df1e475b-config-volume\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.343859 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a02cd7d-3718-4e26-b054-9a47df1e475b-secret-volume\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.352800 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjp58\" (UniqueName: \"kubernetes.io/projected/4a02cd7d-3718-4e26-b054-9a47df1e475b-kube-api-access-cjp58\") pod \"collect-profiles-29423010-rwmzq\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:00 crc kubenswrapper[4718]: I1210 15:30:00.510617 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:08 crc kubenswrapper[4718]: I1210 15:30:08.901749 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq"] Dec 10 15:30:08 crc kubenswrapper[4718]: I1210 15:30:08.994274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" event={"ID":"4a02cd7d-3718-4e26-b054-9a47df1e475b","Type":"ContainerStarted","Data":"1989148173a3a76802d6b4ee9178c80ecbcbb6783ef0771f179f8dfcde3c742b"} Dec 10 15:30:10 crc kubenswrapper[4718]: I1210 15:30:10.008514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"df134785-8fb2-418f-89ba-55f6d822f50a","Type":"ContainerStarted","Data":"1f4ef711b0b00e11624af540c169e9f859704946e61429a4a90e8e686afe7ace"} Dec 10 15:30:10 crc kubenswrapper[4718]: I1210 15:30:10.016701 4718 generic.go:334] "Generic (PLEG): container finished" podID="4a02cd7d-3718-4e26-b054-9a47df1e475b" containerID="3c982718c52d8aec964ad4cdf30e83736ce2bfa51c3ceb95be430e20257f322f" exitCode=0 Dec 10 15:30:10 crc kubenswrapper[4718]: I1210 15:30:10.016758 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" event={"ID":"4a02cd7d-3718-4e26-b054-9a47df1e475b","Type":"ContainerDied","Data":"3c982718c52d8aec964ad4cdf30e83736ce2bfa51c3ceb95be430e20257f322f"} Dec 10 15:30:10 crc kubenswrapper[4718]: I1210 15:30:10.058135 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.129441015 podStartE2EDuration="21.058103082s" podCreationTimestamp="2025-12-10 15:29:49 +0000 UTC" firstStartedPulling="2025-12-10 15:29:51.700507952 +0000 UTC m=+3496.649731369" lastFinishedPulling="2025-12-10 15:30:08.629170019 +0000 UTC m=+3513.578393436" 
observedRunningTime="2025-12-10 15:30:10.042305994 +0000 UTC m=+3514.991529411" watchObservedRunningTime="2025-12-10 15:30:10.058103082 +0000 UTC m=+3515.007326499" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.527235 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.714234 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a02cd7d-3718-4e26-b054-9a47df1e475b-secret-volume\") pod \"4a02cd7d-3718-4e26-b054-9a47df1e475b\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.714526 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a02cd7d-3718-4e26-b054-9a47df1e475b-config-volume\") pod \"4a02cd7d-3718-4e26-b054-9a47df1e475b\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.715195 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a02cd7d-3718-4e26-b054-9a47df1e475b-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a02cd7d-3718-4e26-b054-9a47df1e475b" (UID: "4a02cd7d-3718-4e26-b054-9a47df1e475b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.715378 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjp58\" (UniqueName: \"kubernetes.io/projected/4a02cd7d-3718-4e26-b054-9a47df1e475b-kube-api-access-cjp58\") pod \"4a02cd7d-3718-4e26-b054-9a47df1e475b\" (UID: \"4a02cd7d-3718-4e26-b054-9a47df1e475b\") " Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.716572 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a02cd7d-3718-4e26-b054-9a47df1e475b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.724135 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a02cd7d-3718-4e26-b054-9a47df1e475b-kube-api-access-cjp58" (OuterVolumeSpecName: "kube-api-access-cjp58") pod "4a02cd7d-3718-4e26-b054-9a47df1e475b" (UID: "4a02cd7d-3718-4e26-b054-9a47df1e475b"). InnerVolumeSpecName "kube-api-access-cjp58". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.724993 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a02cd7d-3718-4e26-b054-9a47df1e475b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a02cd7d-3718-4e26-b054-9a47df1e475b" (UID: "4a02cd7d-3718-4e26-b054-9a47df1e475b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.820162 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjp58\" (UniqueName: \"kubernetes.io/projected/4a02cd7d-3718-4e26-b054-9a47df1e475b-kube-api-access-cjp58\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:11 crc kubenswrapper[4718]: I1210 15:30:11.820231 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a02cd7d-3718-4e26-b054-9a47df1e475b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:30:12 crc kubenswrapper[4718]: I1210 15:30:12.068996 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" event={"ID":"4a02cd7d-3718-4e26-b054-9a47df1e475b","Type":"ContainerDied","Data":"1989148173a3a76802d6b4ee9178c80ecbcbb6783ef0771f179f8dfcde3c742b"} Dec 10 15:30:12 crc kubenswrapper[4718]: I1210 15:30:12.069051 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1989148173a3a76802d6b4ee9178c80ecbcbb6783ef0771f179f8dfcde3c742b" Dec 10 15:30:12 crc kubenswrapper[4718]: I1210 15:30:12.069058 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq" Dec 10 15:30:12 crc kubenswrapper[4718]: I1210 15:30:12.634984 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq"] Dec 10 15:30:12 crc kubenswrapper[4718]: I1210 15:30:12.650055 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422965-6d2gq"] Dec 10 15:30:14 crc kubenswrapper[4718]: I1210 15:30:14.035403 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd1608c-f123-43ee-ab77-cb3efa8a81cc" path="/var/lib/kubelet/pods/6fd1608c-f123-43ee-ab77-cb3efa8a81cc/volumes" Dec 10 15:30:48 crc kubenswrapper[4718]: I1210 15:30:48.084281 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:30:48 crc kubenswrapper[4718]: I1210 15:30:48.085147 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:30:56 crc kubenswrapper[4718]: I1210 15:30:56.046733 4718 scope.go:117] "RemoveContainer" containerID="2e54e9d50ad3dfcbdb453c22ecc606c47d8b5db9653ae77c0735306592f4ca52" Dec 10 15:31:18 crc kubenswrapper[4718]: I1210 15:31:18.084180 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 10 15:31:18 crc kubenswrapper[4718]: I1210 15:31:18.085293 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:31:25 crc kubenswrapper[4718]: I1210 15:31:25.945179 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vw5sv"] Dec 10 15:31:25 crc kubenswrapper[4718]: E1210 15:31:25.952524 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a02cd7d-3718-4e26-b054-9a47df1e475b" containerName="collect-profiles" Dec 10 15:31:25 crc kubenswrapper[4718]: I1210 15:31:25.952564 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a02cd7d-3718-4e26-b054-9a47df1e475b" containerName="collect-profiles" Dec 10 15:31:25 crc kubenswrapper[4718]: I1210 15:31:25.953955 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a02cd7d-3718-4e26-b054-9a47df1e475b" containerName="collect-profiles" Dec 10 15:31:25 crc kubenswrapper[4718]: I1210 15:31:25.956494 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:25 crc kubenswrapper[4718]: I1210 15:31:25.965210 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vw5sv"] Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.038085 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-catalog-content\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.038287 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck2g7\" (UniqueName: \"kubernetes.io/projected/9707f95b-6eef-4f18-9a5e-a700ac0a3314-kube-api-access-ck2g7\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.038337 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-utilities\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.141373 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck2g7\" (UniqueName: \"kubernetes.io/projected/9707f95b-6eef-4f18-9a5e-a700ac0a3314-kube-api-access-ck2g7\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.142004 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-utilities\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.142181 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-catalog-content\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.143043 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-utilities\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.143893 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-catalog-content\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.170271 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck2g7\" (UniqueName: \"kubernetes.io/projected/9707f95b-6eef-4f18-9a5e-a700ac0a3314-kube-api-access-ck2g7\") pod \"redhat-operators-vw5sv\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:26 crc kubenswrapper[4718]: I1210 15:31:26.291269 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:27 crc kubenswrapper[4718]: I1210 15:31:27.203490 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vw5sv"] Dec 10 15:31:28 crc kubenswrapper[4718]: I1210 15:31:28.153008 4718 generic.go:334] "Generic (PLEG): container finished" podID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerID="d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0" exitCode=0 Dec 10 15:31:28 crc kubenswrapper[4718]: I1210 15:31:28.153271 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerDied","Data":"d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0"} Dec 10 15:31:28 crc kubenswrapper[4718]: I1210 15:31:28.153502 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerStarted","Data":"76ebfa6807c7bbd9d1f38ef2986ab4ca9b09724a6972794c675ee7dd2ed76d37"} Dec 10 15:31:28 crc kubenswrapper[4718]: I1210 15:31:28.159310 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:31:30 crc kubenswrapper[4718]: I1210 15:31:30.208571 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerStarted","Data":"89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971"} Dec 10 15:31:34 crc kubenswrapper[4718]: I1210 15:31:34.255301 4718 generic.go:334] "Generic (PLEG): container finished" podID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerID="89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971" exitCode=0 Dec 10 15:31:34 crc kubenswrapper[4718]: I1210 15:31:34.255376 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerDied","Data":"89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971"} Dec 10 15:31:35 crc kubenswrapper[4718]: I1210 15:31:35.270629 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerStarted","Data":"40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883"} Dec 10 15:31:35 crc kubenswrapper[4718]: I1210 15:31:35.305479 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vw5sv" podStartSLOduration=3.619538478 podStartE2EDuration="10.305428807s" podCreationTimestamp="2025-12-10 15:31:25 +0000 UTC" firstStartedPulling="2025-12-10 15:31:28.158923249 +0000 UTC m=+3593.108146666" lastFinishedPulling="2025-12-10 15:31:34.844813578 +0000 UTC m=+3599.794036995" observedRunningTime="2025-12-10 15:31:35.300551223 +0000 UTC m=+3600.249774640" watchObservedRunningTime="2025-12-10 15:31:35.305428807 +0000 UTC m=+3600.254652224" Dec 10 15:31:36 crc kubenswrapper[4718]: I1210 15:31:36.293244 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:36 crc kubenswrapper[4718]: I1210 15:31:36.294014 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:37 crc kubenswrapper[4718]: I1210 15:31:37.370117 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vw5sv" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="registry-server" probeResult="failure" output=< Dec 10 15:31:37 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 15:31:37 crc kubenswrapper[4718]: > Dec 10 15:31:46 crc kubenswrapper[4718]: I1210 
15:31:46.347971 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:46 crc kubenswrapper[4718]: I1210 15:31:46.404366 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:46 crc kubenswrapper[4718]: I1210 15:31:46.590460 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vw5sv"] Dec 10 15:31:47 crc kubenswrapper[4718]: I1210 15:31:47.426080 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vw5sv" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="registry-server" containerID="cri-o://40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883" gracePeriod=2 Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.056597 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.084650 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.084719 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.084764 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.085677 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.085736 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" gracePeriod=600 Dec 10 15:31:48 crc kubenswrapper[4718]: E1210 15:31:48.208559 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.209923 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-catalog-content\") pod \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.210147 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-utilities\") pod 
\"9707f95b-6eef-4f18-9a5e-a700ac0a3314\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.210267 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck2g7\" (UniqueName: \"kubernetes.io/projected/9707f95b-6eef-4f18-9a5e-a700ac0a3314-kube-api-access-ck2g7\") pod \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\" (UID: \"9707f95b-6eef-4f18-9a5e-a700ac0a3314\") " Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.211902 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-utilities" (OuterVolumeSpecName: "utilities") pod "9707f95b-6eef-4f18-9a5e-a700ac0a3314" (UID: "9707f95b-6eef-4f18-9a5e-a700ac0a3314"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.224959 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9707f95b-6eef-4f18-9a5e-a700ac0a3314-kube-api-access-ck2g7" (OuterVolumeSpecName: "kube-api-access-ck2g7") pod "9707f95b-6eef-4f18-9a5e-a700ac0a3314" (UID: "9707f95b-6eef-4f18-9a5e-a700ac0a3314"). InnerVolumeSpecName "kube-api-access-ck2g7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.313774 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.313815 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck2g7\" (UniqueName: \"kubernetes.io/projected/9707f95b-6eef-4f18-9a5e-a700ac0a3314-kube-api-access-ck2g7\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.350645 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9707f95b-6eef-4f18-9a5e-a700ac0a3314" (UID: "9707f95b-6eef-4f18-9a5e-a700ac0a3314"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.415665 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9707f95b-6eef-4f18-9a5e-a700ac0a3314-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.439436 4718 generic.go:334] "Generic (PLEG): container finished" podID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerID="40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883" exitCode=0 Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.439526 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerDied","Data":"40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883"} Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.439547 4718 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vw5sv" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.439676 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vw5sv" event={"ID":"9707f95b-6eef-4f18-9a5e-a700ac0a3314","Type":"ContainerDied","Data":"76ebfa6807c7bbd9d1f38ef2986ab4ca9b09724a6972794c675ee7dd2ed76d37"} Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.439748 4718 scope.go:117] "RemoveContainer" containerID="40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.446055 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" exitCode=0 Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.446139 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b"} Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.447022 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:31:48 crc kubenswrapper[4718]: E1210 15:31:48.447532 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.477294 4718 scope.go:117] "RemoveContainer" 
containerID="89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.507454 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vw5sv"] Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.521594 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vw5sv"] Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.524331 4718 scope.go:117] "RemoveContainer" containerID="d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.549667 4718 scope.go:117] "RemoveContainer" containerID="40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883" Dec 10 15:31:48 crc kubenswrapper[4718]: E1210 15:31:48.550328 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883\": container with ID starting with 40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883 not found: ID does not exist" containerID="40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.550381 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883"} err="failed to get container status \"40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883\": rpc error: code = NotFound desc = could not find container \"40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883\": container with ID starting with 40c4016a18732908684d3e7841afc55d4844ae2b54335be3b2866de32e701883 not found: ID does not exist" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.550484 4718 scope.go:117] "RemoveContainer" 
containerID="89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971" Dec 10 15:31:48 crc kubenswrapper[4718]: E1210 15:31:48.550800 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971\": container with ID starting with 89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971 not found: ID does not exist" containerID="89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.550827 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971"} err="failed to get container status \"89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971\": rpc error: code = NotFound desc = could not find container \"89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971\": container with ID starting with 89545d7f2f57f6efc295666092f246fa5676e7111983d04792fd6f918b219971 not found: ID does not exist" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.550840 4718 scope.go:117] "RemoveContainer" containerID="d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0" Dec 10 15:31:48 crc kubenswrapper[4718]: E1210 15:31:48.551296 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0\": container with ID starting with d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0 not found: ID does not exist" containerID="d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.551362 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0"} err="failed to get container status \"d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0\": rpc error: code = NotFound desc = could not find container \"d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0\": container with ID starting with d7b86200bf135148e9d0e8b98172bba35b44169d64b304ef317d354727d990e0 not found: ID does not exist" Dec 10 15:31:48 crc kubenswrapper[4718]: I1210 15:31:48.551382 4718 scope.go:117] "RemoveContainer" containerID="b0ebbb69003f414bf5979556281a474a5538c66cccb2f18435180d4ba1a9de43" Dec 10 15:31:50 crc kubenswrapper[4718]: I1210 15:31:50.033606 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" path="/var/lib/kubelet/pods/9707f95b-6eef-4f18-9a5e-a700ac0a3314/volumes" Dec 10 15:31:59 crc kubenswrapper[4718]: I1210 15:31:59.021101 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:31:59 crc kubenswrapper[4718]: E1210 15:31:59.022567 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.403499 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4tf4b"] Dec 10 15:32:02 crc kubenswrapper[4718]: E1210 15:32:02.406339 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="extract-utilities" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 
15:32:02.406486 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="extract-utilities" Dec 10 15:32:02 crc kubenswrapper[4718]: E1210 15:32:02.406588 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="registry-server" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.406658 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="registry-server" Dec 10 15:32:02 crc kubenswrapper[4718]: E1210 15:32:02.406741 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="extract-content" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.406807 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="extract-content" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.407213 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="9707f95b-6eef-4f18-9a5e-a700ac0a3314" containerName="registry-server" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.409139 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.420262 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tf4b"] Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.593064 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-utilities\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.593307 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-catalog-content\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.593358 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mpd6\" (UniqueName: \"kubernetes.io/projected/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-kube-api-access-2mpd6\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.695416 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-catalog-content\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.695540 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2mpd6\" (UniqueName: \"kubernetes.io/projected/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-kube-api-access-2mpd6\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.695708 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-utilities\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.696199 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-catalog-content\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.696343 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-utilities\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.729821 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mpd6\" (UniqueName: \"kubernetes.io/projected/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-kube-api-access-2mpd6\") pod \"redhat-marketplace-4tf4b\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:02 crc kubenswrapper[4718]: I1210 15:32:02.736815 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:03 crc kubenswrapper[4718]: I1210 15:32:03.275996 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tf4b"] Dec 10 15:32:03 crc kubenswrapper[4718]: I1210 15:32:03.623813 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerStarted","Data":"f617d16e3538f539c58748e5f4293a416b9d5337a90c39bb6860d5c3bf3bed4a"} Dec 10 15:32:04 crc kubenswrapper[4718]: I1210 15:32:04.638165 4718 generic.go:334] "Generic (PLEG): container finished" podID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerID="373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee" exitCode=0 Dec 10 15:32:04 crc kubenswrapper[4718]: I1210 15:32:04.638230 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerDied","Data":"373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee"} Dec 10 15:32:05 crc kubenswrapper[4718]: I1210 15:32:05.653043 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerStarted","Data":"5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85"} Dec 10 15:32:06 crc kubenswrapper[4718]: I1210 15:32:06.669559 4718 generic.go:334] "Generic (PLEG): container finished" podID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerID="5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85" exitCode=0 Dec 10 15:32:06 crc kubenswrapper[4718]: I1210 15:32:06.670130 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" 
event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerDied","Data":"5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85"} Dec 10 15:32:07 crc kubenswrapper[4718]: I1210 15:32:07.683595 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerStarted","Data":"738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3"} Dec 10 15:32:07 crc kubenswrapper[4718]: I1210 15:32:07.705557 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4tf4b" podStartSLOduration=3.1771167670000002 podStartE2EDuration="5.705523949s" podCreationTimestamp="2025-12-10 15:32:02 +0000 UTC" firstStartedPulling="2025-12-10 15:32:04.640786551 +0000 UTC m=+3629.590009968" lastFinishedPulling="2025-12-10 15:32:07.169193723 +0000 UTC m=+3632.118417150" observedRunningTime="2025-12-10 15:32:07.703073502 +0000 UTC m=+3632.652296919" watchObservedRunningTime="2025-12-10 15:32:07.705523949 +0000 UTC m=+3632.654747366" Dec 10 15:32:12 crc kubenswrapper[4718]: I1210 15:32:12.738894 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:12 crc kubenswrapper[4718]: I1210 15:32:12.739784 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:12 crc kubenswrapper[4718]: I1210 15:32:12.796567 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:13 crc kubenswrapper[4718]: I1210 15:32:13.798898 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:13 crc kubenswrapper[4718]: I1210 15:32:13.866255 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4tf4b"] Dec 10 15:32:14 crc kubenswrapper[4718]: I1210 15:32:14.021259 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:32:14 crc kubenswrapper[4718]: E1210 15:32:14.022007 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:32:15 crc kubenswrapper[4718]: I1210 15:32:15.774192 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4tf4b" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="registry-server" containerID="cri-o://738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3" gracePeriod=2 Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.304493 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.414366 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-utilities\") pod \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.415184 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mpd6\" (UniqueName: \"kubernetes.io/projected/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-kube-api-access-2mpd6\") pod \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.415362 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-catalog-content\") pod \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\" (UID: \"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f\") " Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.415617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-utilities" (OuterVolumeSpecName: "utilities") pod "fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" (UID: "fc1e16f2-291b-4f83-9bb8-1f1b8a59299f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.415972 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.423050 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-kube-api-access-2mpd6" (OuterVolumeSpecName: "kube-api-access-2mpd6") pod "fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" (UID: "fc1e16f2-291b-4f83-9bb8-1f1b8a59299f"). InnerVolumeSpecName "kube-api-access-2mpd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.446777 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" (UID: "fc1e16f2-291b-4f83-9bb8-1f1b8a59299f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.518446 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mpd6\" (UniqueName: \"kubernetes.io/projected/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-kube-api-access-2mpd6\") on node \"crc\" DevicePath \"\"" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.518511 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.787005 4718 generic.go:334] "Generic (PLEG): container finished" podID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerID="738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3" exitCode=0 Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.787065 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerDied","Data":"738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3"} Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.787075 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4tf4b" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.787107 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4tf4b" event={"ID":"fc1e16f2-291b-4f83-9bb8-1f1b8a59299f","Type":"ContainerDied","Data":"f617d16e3538f539c58748e5f4293a416b9d5337a90c39bb6860d5c3bf3bed4a"} Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.787130 4718 scope.go:117] "RemoveContainer" containerID="738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.821944 4718 scope.go:117] "RemoveContainer" containerID="5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.829205 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tf4b"] Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.843645 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4tf4b"] Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.868152 4718 scope.go:117] "RemoveContainer" containerID="373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.917962 4718 scope.go:117] "RemoveContainer" containerID="738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3" Dec 10 15:32:16 crc kubenswrapper[4718]: E1210 15:32:16.918749 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3\": container with ID starting with 738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3 not found: ID does not exist" containerID="738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.918823 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3"} err="failed to get container status \"738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3\": rpc error: code = NotFound desc = could not find container \"738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3\": container with ID starting with 738984c23ede0f619e1bb5df1c97e238ad05a2f4099c03e06e34a61a94364ea3 not found: ID does not exist" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.918862 4718 scope.go:117] "RemoveContainer" containerID="5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85" Dec 10 15:32:16 crc kubenswrapper[4718]: E1210 15:32:16.919225 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85\": container with ID starting with 5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85 not found: ID does not exist" containerID="5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.919264 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85"} err="failed to get container status \"5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85\": rpc error: code = NotFound desc = could not find container \"5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85\": container with ID starting with 5b961bf24b1aef01361f1922e6d08426b01d5f56bdc051968c6c121a01a5bb85 not found: ID does not exist" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.919284 4718 scope.go:117] "RemoveContainer" containerID="373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee" Dec 10 15:32:16 crc kubenswrapper[4718]: E1210 
15:32:16.919589 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee\": container with ID starting with 373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee not found: ID does not exist" containerID="373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee" Dec 10 15:32:16 crc kubenswrapper[4718]: I1210 15:32:16.919637 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee"} err="failed to get container status \"373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee\": rpc error: code = NotFound desc = could not find container \"373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee\": container with ID starting with 373192321ac676fc6b46708e9270efd249c9c4179f0ea1278957e5fa5227e6ee not found: ID does not exist" Dec 10 15:32:18 crc kubenswrapper[4718]: I1210 15:32:18.036870 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" path="/var/lib/kubelet/pods/fc1e16f2-291b-4f83-9bb8-1f1b8a59299f/volumes" Dec 10 15:32:27 crc kubenswrapper[4718]: I1210 15:32:27.020873 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:32:27 crc kubenswrapper[4718]: E1210 15:32:27.022342 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:32:39 crc kubenswrapper[4718]: I1210 15:32:39.021330 
4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:32:39 crc kubenswrapper[4718]: E1210 15:32:39.022738 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:32:52 crc kubenswrapper[4718]: I1210 15:32:52.021111 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:32:52 crc kubenswrapper[4718]: E1210 15:32:52.022408 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:33:06 crc kubenswrapper[4718]: I1210 15:33:06.030612 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:33:06 crc kubenswrapper[4718]: E1210 15:33:06.032273 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:33:18 crc kubenswrapper[4718]: I1210 
15:33:18.021620 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:33:18 crc kubenswrapper[4718]: E1210 15:33:18.022798 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:33:30 crc kubenswrapper[4718]: I1210 15:33:30.021164 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:33:30 crc kubenswrapper[4718]: E1210 15:33:30.022594 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:33:44 crc kubenswrapper[4718]: I1210 15:33:44.021232 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:33:44 crc kubenswrapper[4718]: E1210 15:33:44.022538 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:33:56 crc 
kubenswrapper[4718]: I1210 15:33:56.031547 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:33:56 crc kubenswrapper[4718]: E1210 15:33:56.032929 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:34:10 crc kubenswrapper[4718]: I1210 15:34:10.020826 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:34:10 crc kubenswrapper[4718]: E1210 15:34:10.022076 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:34:21 crc kubenswrapper[4718]: I1210 15:34:21.021245 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:34:21 crc kubenswrapper[4718]: E1210 15:34:21.024592 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 
10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.345713 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b2c8z"] Dec 10 15:34:27 crc kubenswrapper[4718]: E1210 15:34:27.347240 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="registry-server" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.347256 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="registry-server" Dec 10 15:34:27 crc kubenswrapper[4718]: E1210 15:34:27.347278 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="extract-utilities" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.347286 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="extract-utilities" Dec 10 15:34:27 crc kubenswrapper[4718]: E1210 15:34:27.347298 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="extract-content" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.347304 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="extract-content" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.347572 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1e16f2-291b-4f83-9bb8-1f1b8a59299f" containerName="registry-server" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.349520 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.373721 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2c8z"] Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.435736 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-catalog-content\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.435881 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-utilities\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.435928 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxfr6\" (UniqueName: \"kubernetes.io/projected/0bf77adc-a31f-4488-b949-62e25cdf5c7e-kube-api-access-xxfr6\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.543445 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-utilities\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.543539 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xxfr6\" (UniqueName: \"kubernetes.io/projected/0bf77adc-a31f-4488-b949-62e25cdf5c7e-kube-api-access-xxfr6\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.543857 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-catalog-content\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.544356 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-utilities\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.544517 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-catalog-content\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.571513 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxfr6\" (UniqueName: \"kubernetes.io/projected/0bf77adc-a31f-4488-b949-62e25cdf5c7e-kube-api-access-xxfr6\") pod \"community-operators-b2c8z\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:27 crc kubenswrapper[4718]: I1210 15:34:27.703090 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:28 crc kubenswrapper[4718]: I1210 15:34:28.302439 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b2c8z"] Dec 10 15:34:28 crc kubenswrapper[4718]: I1210 15:34:28.404227 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2c8z" event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerStarted","Data":"86625cd9aac4e6fc90ce7577e778ad12fa71d9f4cef9d303d27fe08182a1c64d"} Dec 10 15:34:29 crc kubenswrapper[4718]: I1210 15:34:29.419641 4718 generic.go:334] "Generic (PLEG): container finished" podID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerID="65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26" exitCode=0 Dec 10 15:34:29 crc kubenswrapper[4718]: I1210 15:34:29.419719 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2c8z" event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerDied","Data":"65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26"} Dec 10 15:34:31 crc kubenswrapper[4718]: I1210 15:34:31.452514 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2c8z" event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerStarted","Data":"d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5"} Dec 10 15:34:32 crc kubenswrapper[4718]: I1210 15:34:32.465547 4718 generic.go:334] "Generic (PLEG): container finished" podID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerID="d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5" exitCode=0 Dec 10 15:34:32 crc kubenswrapper[4718]: I1210 15:34:32.465615 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2c8z" 
event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerDied","Data":"d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5"} Dec 10 15:34:33 crc kubenswrapper[4718]: I1210 15:34:33.025486 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:34:33 crc kubenswrapper[4718]: E1210 15:34:33.025853 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:34:33 crc kubenswrapper[4718]: I1210 15:34:33.483254 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2c8z" event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerStarted","Data":"93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0"} Dec 10 15:34:33 crc kubenswrapper[4718]: I1210 15:34:33.526515 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b2c8z" podStartSLOduration=2.808805576 podStartE2EDuration="6.526486186s" podCreationTimestamp="2025-12-10 15:34:27 +0000 UTC" firstStartedPulling="2025-12-10 15:34:29.423640583 +0000 UTC m=+3774.372864000" lastFinishedPulling="2025-12-10 15:34:33.141321193 +0000 UTC m=+3778.090544610" observedRunningTime="2025-12-10 15:34:33.510369888 +0000 UTC m=+3778.459593305" watchObservedRunningTime="2025-12-10 15:34:33.526486186 +0000 UTC m=+3778.475709603" Dec 10 15:34:37 crc kubenswrapper[4718]: I1210 15:34:37.704363 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:37 crc 
kubenswrapper[4718]: I1210 15:34:37.705613 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:37 crc kubenswrapper[4718]: I1210 15:34:37.775993 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:38 crc kubenswrapper[4718]: I1210 15:34:38.590796 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:39 crc kubenswrapper[4718]: I1210 15:34:39.074311 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2c8z"] Dec 10 15:34:40 crc kubenswrapper[4718]: I1210 15:34:40.662674 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b2c8z" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="registry-server" containerID="cri-o://93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0" gracePeriod=2 Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.208690 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.362458 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-catalog-content\") pod \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.362541 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfr6\" (UniqueName: \"kubernetes.io/projected/0bf77adc-a31f-4488-b949-62e25cdf5c7e-kube-api-access-xxfr6\") pod \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.362712 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-utilities\") pod \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\" (UID: \"0bf77adc-a31f-4488-b949-62e25cdf5c7e\") " Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.364109 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-utilities" (OuterVolumeSpecName: "utilities") pod "0bf77adc-a31f-4488-b949-62e25cdf5c7e" (UID: "0bf77adc-a31f-4488-b949-62e25cdf5c7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.371763 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf77adc-a31f-4488-b949-62e25cdf5c7e-kube-api-access-xxfr6" (OuterVolumeSpecName: "kube-api-access-xxfr6") pod "0bf77adc-a31f-4488-b949-62e25cdf5c7e" (UID: "0bf77adc-a31f-4488-b949-62e25cdf5c7e"). InnerVolumeSpecName "kube-api-access-xxfr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.430145 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bf77adc-a31f-4488-b949-62e25cdf5c7e" (UID: "0bf77adc-a31f-4488-b949-62e25cdf5c7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.465627 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.465674 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxfr6\" (UniqueName: \"kubernetes.io/projected/0bf77adc-a31f-4488-b949-62e25cdf5c7e-kube-api-access-xxfr6\") on node \"crc\" DevicePath \"\"" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.465715 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bf77adc-a31f-4488-b949-62e25cdf5c7e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.677304 4718 generic.go:334] "Generic (PLEG): container finished" podID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerID="93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0" exitCode=0 Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.677359 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b2c8z" event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerDied","Data":"93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0"} Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.677410 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-b2c8z" event={"ID":"0bf77adc-a31f-4488-b949-62e25cdf5c7e","Type":"ContainerDied","Data":"86625cd9aac4e6fc90ce7577e778ad12fa71d9f4cef9d303d27fe08182a1c64d"} Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.677429 4718 scope.go:117] "RemoveContainer" containerID="93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.677603 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b2c8z" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.720564 4718 scope.go:117] "RemoveContainer" containerID="d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.726592 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b2c8z"] Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.738267 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b2c8z"] Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.748317 4718 scope.go:117] "RemoveContainer" containerID="65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.801415 4718 scope.go:117] "RemoveContainer" containerID="93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0" Dec 10 15:34:41 crc kubenswrapper[4718]: E1210 15:34:41.802788 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0\": container with ID starting with 93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0 not found: ID does not exist" containerID="93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 
15:34:41.802877 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0"} err="failed to get container status \"93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0\": rpc error: code = NotFound desc = could not find container \"93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0\": container with ID starting with 93d157714c1987653b6d5f077574a4f6de812fc84a8e99328755f180189b78a0 not found: ID does not exist" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.802901 4718 scope.go:117] "RemoveContainer" containerID="d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5" Dec 10 15:34:41 crc kubenswrapper[4718]: E1210 15:34:41.804762 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5\": container with ID starting with d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5 not found: ID does not exist" containerID="d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.804795 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5"} err="failed to get container status \"d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5\": rpc error: code = NotFound desc = could not find container \"d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5\": container with ID starting with d303fb2bbf431304074eb05d1a023555eedb588a06a9d004e7bf23e19f7701e5 not found: ID does not exist" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.804811 4718 scope.go:117] "RemoveContainer" containerID="65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26" Dec 10 15:34:41 crc 
kubenswrapper[4718]: E1210 15:34:41.805019 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26\": container with ID starting with 65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26 not found: ID does not exist" containerID="65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26" Dec 10 15:34:41 crc kubenswrapper[4718]: I1210 15:34:41.805049 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26"} err="failed to get container status \"65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26\": rpc error: code = NotFound desc = could not find container \"65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26\": container with ID starting with 65808460bee645786c7b55427330008375d6c28ef8b0a283c7648d61d2267c26 not found: ID does not exist" Dec 10 15:34:42 crc kubenswrapper[4718]: I1210 15:34:42.042067 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" path="/var/lib/kubelet/pods/0bf77adc-a31f-4488-b949-62e25cdf5c7e/volumes" Dec 10 15:34:47 crc kubenswrapper[4718]: I1210 15:34:47.020846 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:34:47 crc kubenswrapper[4718]: E1210 15:34:47.021964 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:35:02 crc 
kubenswrapper[4718]: I1210 15:35:02.020897 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:35:02 crc kubenswrapper[4718]: E1210 15:35:02.021997 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:35:16 crc kubenswrapper[4718]: I1210 15:35:16.028026 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:35:16 crc kubenswrapper[4718]: E1210 15:35:16.029234 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:35:30 crc kubenswrapper[4718]: I1210 15:35:30.021014 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:35:30 crc kubenswrapper[4718]: E1210 15:35:30.022116 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 
10 15:35:42 crc kubenswrapper[4718]: I1210 15:35:42.021683 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:35:42 crc kubenswrapper[4718]: E1210 15:35:42.024212 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:35:53 crc kubenswrapper[4718]: I1210 15:35:53.021430 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:35:53 crc kubenswrapper[4718]: E1210 15:35:53.022709 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:36:06 crc kubenswrapper[4718]: I1210 15:36:06.032105 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:36:06 crc kubenswrapper[4718]: E1210 15:36:06.033748 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:36:21 crc kubenswrapper[4718]: I1210 15:36:21.022003 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:36:21 crc kubenswrapper[4718]: E1210 15:36:21.023521 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:36:34 crc kubenswrapper[4718]: I1210 15:36:34.021703 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:36:34 crc kubenswrapper[4718]: E1210 15:36:34.022950 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:36:47 crc kubenswrapper[4718]: I1210 15:36:47.021959 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:36:47 crc kubenswrapper[4718]: E1210 15:36:47.023486 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:37:00 crc kubenswrapper[4718]: I1210 15:37:00.020936 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b" Dec 10 15:37:00 crc kubenswrapper[4718]: I1210 15:37:00.476125 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"091febd21323ea160579007db4efde5bdc49cc265b8f871db31f8669c62c4ddd"} Dec 10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.980839 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bq6rm"] Dec 10 15:37:23 crc kubenswrapper[4718]: E1210 15:37:23.985199 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="extract-utilities" Dec 10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.985344 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="extract-utilities" Dec 10 15:37:23 crc kubenswrapper[4718]: E1210 15:37:23.985483 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="extract-content" Dec 10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.985573 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="extract-content" Dec 10 15:37:23 crc kubenswrapper[4718]: E1210 15:37:23.985693 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="registry-server" Dec 10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.985772 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="registry-server" Dec 
10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.986323 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf77adc-a31f-4488-b949-62e25cdf5c7e" containerName="registry-server" Dec 10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.989252 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:23 crc kubenswrapper[4718]: I1210 15:37:23.993727 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq6rm"] Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.067421 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-utilities\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.067526 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftjk\" (UniqueName: \"kubernetes.io/projected/8b64809a-893c-4855-8f41-e52b08a55fd3-kube-api-access-6ftjk\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.067580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-catalog-content\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.170316 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-utilities\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.170516 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftjk\" (UniqueName: \"kubernetes.io/projected/8b64809a-893c-4855-8f41-e52b08a55fd3-kube-api-access-6ftjk\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.170592 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-catalog-content\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.171450 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-catalog-content\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.171486 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-utilities\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.207205 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftjk\" (UniqueName: 
\"kubernetes.io/projected/8b64809a-893c-4855-8f41-e52b08a55fd3-kube-api-access-6ftjk\") pod \"certified-operators-bq6rm\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") " pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.326354 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6rm" Dec 10 15:37:24 crc kubenswrapper[4718]: I1210 15:37:24.964975 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq6rm"] Dec 10 15:37:25 crc kubenswrapper[4718]: I1210 15:37:25.779749 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerStarted","Data":"3318f07c470fb6f294431d20af1bac1fc294f85e333e9b59367fd210979f9f8b"} Dec 10 15:37:26 crc kubenswrapper[4718]: I1210 15:37:26.792346 4718 generic.go:334] "Generic (PLEG): container finished" podID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerID="ff5b79599b01cc334c67c7e2fcc4109b3183081bdfad421ba465ef6e54b7b96e" exitCode=0 Dec 10 15:37:26 crc kubenswrapper[4718]: I1210 15:37:26.792758 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerDied","Data":"ff5b79599b01cc334c67c7e2fcc4109b3183081bdfad421ba465ef6e54b7b96e"} Dec 10 15:37:26 crc kubenswrapper[4718]: I1210 15:37:26.795627 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:37:30 crc kubenswrapper[4718]: I1210 15:37:30.836588 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerStarted","Data":"fcdbddae4a2f072a28ed1e9a1ad5af7f3478ad07e986ba26469626d7b5f55399"} Dec 10 15:37:32 
crc kubenswrapper[4718]: I1210 15:37:32.870108 4718 generic.go:334] "Generic (PLEG): container finished" podID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerID="fcdbddae4a2f072a28ed1e9a1ad5af7f3478ad07e986ba26469626d7b5f55399" exitCode=0
Dec 10 15:37:32 crc kubenswrapper[4718]: I1210 15:37:32.870177 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerDied","Data":"fcdbddae4a2f072a28ed1e9a1ad5af7f3478ad07e986ba26469626d7b5f55399"}
Dec 10 15:37:34 crc kubenswrapper[4718]: I1210 15:37:34.910739 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerStarted","Data":"250314ddd2f470fa7fd2bd361d91888619e01d75498ec58dc1520e135ab0d67e"}
Dec 10 15:37:34 crc kubenswrapper[4718]: I1210 15:37:34.942923 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bq6rm" podStartSLOduration=5.304984528 podStartE2EDuration="11.942859116s" podCreationTimestamp="2025-12-10 15:37:23 +0000 UTC" firstStartedPulling="2025-12-10 15:37:26.795211101 +0000 UTC m=+3951.744434528" lastFinishedPulling="2025-12-10 15:37:33.433085699 +0000 UTC m=+3958.382309116" observedRunningTime="2025-12-10 15:37:34.93120835 +0000 UTC m=+3959.880431777" watchObservedRunningTime="2025-12-10 15:37:34.942859116 +0000 UTC m=+3959.892082533"
Dec 10 15:37:44 crc kubenswrapper[4718]: I1210 15:37:44.328216 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bq6rm"
Dec 10 15:37:44 crc kubenswrapper[4718]: I1210 15:37:44.329015 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bq6rm"
Dec 10 15:37:44 crc kubenswrapper[4718]: I1210 15:37:44.388028 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bq6rm"
Dec 10 15:37:45 crc kubenswrapper[4718]: I1210 15:37:45.092464 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bq6rm"
Dec 10 15:37:45 crc kubenswrapper[4718]: I1210 15:37:45.154854 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq6rm"]
Dec 10 15:37:47 crc kubenswrapper[4718]: I1210 15:37:47.047081 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bq6rm" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="registry-server" containerID="cri-o://250314ddd2f470fa7fd2bd361d91888619e01d75498ec58dc1520e135ab0d67e" gracePeriod=2
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.098427 4718 generic.go:334] "Generic (PLEG): container finished" podID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerID="250314ddd2f470fa7fd2bd361d91888619e01d75498ec58dc1520e135ab0d67e" exitCode=0
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.098737 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerDied","Data":"250314ddd2f470fa7fd2bd361d91888619e01d75498ec58dc1520e135ab0d67e"}
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.216329 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6rm"
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.326357 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-utilities\") pod \"8b64809a-893c-4855-8f41-e52b08a55fd3\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") "
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.327843 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-utilities" (OuterVolumeSpecName: "utilities") pod "8b64809a-893c-4855-8f41-e52b08a55fd3" (UID: "8b64809a-893c-4855-8f41-e52b08a55fd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.429068 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-catalog-content\") pod \"8b64809a-893c-4855-8f41-e52b08a55fd3\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") "
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.429150 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ftjk\" (UniqueName: \"kubernetes.io/projected/8b64809a-893c-4855-8f41-e52b08a55fd3-kube-api-access-6ftjk\") pod \"8b64809a-893c-4855-8f41-e52b08a55fd3\" (UID: \"8b64809a-893c-4855-8f41-e52b08a55fd3\") "
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.429601 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.438869 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b64809a-893c-4855-8f41-e52b08a55fd3-kube-api-access-6ftjk" (OuterVolumeSpecName: "kube-api-access-6ftjk") pod "8b64809a-893c-4855-8f41-e52b08a55fd3" (UID: "8b64809a-893c-4855-8f41-e52b08a55fd3"). InnerVolumeSpecName "kube-api-access-6ftjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.524002 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b64809a-893c-4855-8f41-e52b08a55fd3" (UID: "8b64809a-893c-4855-8f41-e52b08a55fd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.532496 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b64809a-893c-4855-8f41-e52b08a55fd3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 15:37:48 crc kubenswrapper[4718]: I1210 15:37:48.532536 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ftjk\" (UniqueName: \"kubernetes.io/projected/8b64809a-893c-4855-8f41-e52b08a55fd3-kube-api-access-6ftjk\") on node \"crc\" DevicePath \"\""
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.111701 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq6rm" event={"ID":"8b64809a-893c-4855-8f41-e52b08a55fd3","Type":"ContainerDied","Data":"3318f07c470fb6f294431d20af1bac1fc294f85e333e9b59367fd210979f9f8b"}
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.112232 4718 scope.go:117] "RemoveContainer" containerID="250314ddd2f470fa7fd2bd361d91888619e01d75498ec58dc1520e135ab0d67e"
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.111803 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq6rm"
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.162807 4718 scope.go:117] "RemoveContainer" containerID="fcdbddae4a2f072a28ed1e9a1ad5af7f3478ad07e986ba26469626d7b5f55399"
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.186109 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq6rm"]
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.198808 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bq6rm"]
Dec 10 15:37:49 crc kubenswrapper[4718]: I1210 15:37:49.205007 4718 scope.go:117] "RemoveContainer" containerID="ff5b79599b01cc334c67c7e2fcc4109b3183081bdfad421ba465ef6e54b7b96e"
Dec 10 15:37:50 crc kubenswrapper[4718]: I1210 15:37:50.034710 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" path="/var/lib/kubelet/pods/8b64809a-893c-4855-8f41-e52b08a55fd3/volumes"
Dec 10 15:39:18 crc kubenswrapper[4718]: I1210 15:39:18.084327 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:39:18 crc kubenswrapper[4718]: I1210 15:39:18.084913 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:39:48 crc kubenswrapper[4718]: I1210 15:39:48.084414 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:39:48 crc kubenswrapper[4718]: I1210 15:39:48.085139 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:40:18 crc kubenswrapper[4718]: I1210 15:40:18.091517 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:40:18 crc kubenswrapper[4718]: I1210 15:40:18.091958 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:40:18 crc kubenswrapper[4718]: I1210 15:40:18.092059 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn"
Dec 10 15:40:18 crc kubenswrapper[4718]: I1210 15:40:18.092793 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"091febd21323ea160579007db4efde5bdc49cc265b8f871db31f8669c62c4ddd"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 15:40:18 crc kubenswrapper[4718]: I1210 15:40:18.092845 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://091febd21323ea160579007db4efde5bdc49cc265b8f871db31f8669c62c4ddd" gracePeriod=600
Dec 10 15:40:19 crc kubenswrapper[4718]: I1210 15:40:19.092725 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="091febd21323ea160579007db4efde5bdc49cc265b8f871db31f8669c62c4ddd" exitCode=0
Dec 10 15:40:19 crc kubenswrapper[4718]: I1210 15:40:19.092834 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"091febd21323ea160579007db4efde5bdc49cc265b8f871db31f8669c62c4ddd"}
Dec 10 15:40:19 crc kubenswrapper[4718]: I1210 15:40:19.093278 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815"}
Dec 10 15:40:19 crc kubenswrapper[4718]: I1210 15:40:19.093363 4718 scope.go:117] "RemoveContainer" containerID="a39fe9ee57bd6a377e7053d9337e95aa45b6b874e934a440d79cfb0832a5276b"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.351491 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 15:41:27 crc kubenswrapper[4718]: E1210 15:41:27.354304 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="extract-utilities"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.354505 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="extract-utilities"
Dec 10 15:41:27 crc kubenswrapper[4718]: E1210 15:41:27.354672 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="registry-server"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.354777 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="registry-server"
Dec 10 15:41:27 crc kubenswrapper[4718]: E1210 15:41:27.354886 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="extract-content"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.354986 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="extract-content"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.355593 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b64809a-893c-4855-8f41-e52b08a55fd3" containerName="registry-server"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.364446 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.373841 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.534807 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-utilities\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.534881 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-catalog-content\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.535095 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcn52\" (UniqueName: \"kubernetes.io/projected/4525909a-e5eb-458e-9a90-b5a079e0eb09-kube-api-access-fcn52\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.636862 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcn52\" (UniqueName: \"kubernetes.io/projected/4525909a-e5eb-458e-9a90-b5a079e0eb09-kube-api-access-fcn52\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.637013 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-utilities\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.637041 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-catalog-content\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.637590 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-catalog-content\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.637643 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-utilities\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.659420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcn52\" (UniqueName: \"kubernetes.io/projected/4525909a-e5eb-458e-9a90-b5a079e0eb09-kube-api-access-fcn52\") pod \"redhat-operators-k9h7w\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") " pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:27 crc kubenswrapper[4718]: I1210 15:41:27.696399 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:28 crc kubenswrapper[4718]: I1210 15:41:28.323161 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 15:41:28 crc kubenswrapper[4718]: I1210 15:41:28.955724 4718 generic.go:334] "Generic (PLEG): container finished" podID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerID="2fe8871bc476d301905625e545dbb1380fb94d90e9764d9582c9f2025c59fd48" exitCode=0
Dec 10 15:41:28 crc kubenswrapper[4718]: I1210 15:41:28.955837 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerDied","Data":"2fe8871bc476d301905625e545dbb1380fb94d90e9764d9582c9f2025c59fd48"}
Dec 10 15:41:28 crc kubenswrapper[4718]: I1210 15:41:28.956057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerStarted","Data":"27a62d18634007ab19c61aa6b2f430e6bc4f299ca360248358f33115e6e368d5"}
Dec 10 15:41:38 crc kubenswrapper[4718]: I1210 15:41:38.086747 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerStarted","Data":"177197318a4c820fc6c984485d843b0fbbe47957d01de3022ca348930c51c341"}
Dec 10 15:41:42 crc kubenswrapper[4718]: I1210 15:41:42.139900 4718 generic.go:334] "Generic (PLEG): container finished" podID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerID="177197318a4c820fc6c984485d843b0fbbe47957d01de3022ca348930c51c341" exitCode=0
Dec 10 15:41:42 crc kubenswrapper[4718]: I1210 15:41:42.139961 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerDied","Data":"177197318a4c820fc6c984485d843b0fbbe47957d01de3022ca348930c51c341"}
Dec 10 15:41:45 crc kubenswrapper[4718]: I1210 15:41:45.173425 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerStarted","Data":"f6834da07f5a2929a3ff60d4941b1ca8ca2ac721d4d0983aaa6745aec58b4b88"}
Dec 10 15:41:45 crc kubenswrapper[4718]: I1210 15:41:45.197584 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9h7w" podStartSLOduration=2.754850418 podStartE2EDuration="18.197543489s" podCreationTimestamp="2025-12-10 15:41:27 +0000 UTC" firstStartedPulling="2025-12-10 15:41:28.957771324 +0000 UTC m=+4193.906994751" lastFinishedPulling="2025-12-10 15:41:44.400464405 +0000 UTC m=+4209.349687822" observedRunningTime="2025-12-10 15:41:45.191929157 +0000 UTC m=+4210.141152584" watchObservedRunningTime="2025-12-10 15:41:45.197543489 +0000 UTC m=+4210.146766906"
Dec 10 15:41:47 crc kubenswrapper[4718]: I1210 15:41:47.696934 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:47 crc kubenswrapper[4718]: I1210 15:41:47.697286 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:48 crc kubenswrapper[4718]: I1210 15:41:48.754450 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9h7w" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="registry-server" probeResult="failure" output=<
Dec 10 15:41:48 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s
Dec 10 15:41:48 crc kubenswrapper[4718]: >
Dec 10 15:41:57 crc kubenswrapper[4718]: I1210 15:41:57.833440 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:57 crc kubenswrapper[4718]: I1210 15:41:57.898974 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 15:41:58 crc kubenswrapper[4718]: I1210 15:41:58.379407 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 15:41:58 crc kubenswrapper[4718]: I1210 15:41:58.553831 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drwpm"]
Dec 10 15:41:58 crc kubenswrapper[4718]: I1210 15:41:58.554301 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drwpm" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="registry-server" containerID="cri-o://90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1" gracePeriod=2
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.192612 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drwpm"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.292117 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-utilities\") pod \"b3c409a6-6840-483d-8019-68c6842f8d25\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") "
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.292244 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8kr\" (UniqueName: \"kubernetes.io/projected/b3c409a6-6840-483d-8019-68c6842f8d25-kube-api-access-td8kr\") pod \"b3c409a6-6840-483d-8019-68c6842f8d25\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") "
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.292355 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-catalog-content\") pod \"b3c409a6-6840-483d-8019-68c6842f8d25\" (UID: \"b3c409a6-6840-483d-8019-68c6842f8d25\") "
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.295832 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-utilities" (OuterVolumeSpecName: "utilities") pod "b3c409a6-6840-483d-8019-68c6842f8d25" (UID: "b3c409a6-6840-483d-8019-68c6842f8d25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.309097 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c409a6-6840-483d-8019-68c6842f8d25-kube-api-access-td8kr" (OuterVolumeSpecName: "kube-api-access-td8kr") pod "b3c409a6-6840-483d-8019-68c6842f8d25" (UID: "b3c409a6-6840-483d-8019-68c6842f8d25"). InnerVolumeSpecName "kube-api-access-td8kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.338035 4718 generic.go:334] "Generic (PLEG): container finished" podID="b3c409a6-6840-483d-8019-68c6842f8d25" containerID="90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1" exitCode=0
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.339314 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drwpm"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.339908 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerDied","Data":"90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1"}
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.339946 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drwpm" event={"ID":"b3c409a6-6840-483d-8019-68c6842f8d25","Type":"ContainerDied","Data":"9dcfde76ae9f4e0f8d6ad176bc365c77d0fdd7a964a8d9610fe95ee402c86761"}
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.339964 4718 scope.go:117] "RemoveContainer" containerID="90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.389128 4718 scope.go:117] "RemoveContainer" containerID="10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.397317 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8kr\" (UniqueName: \"kubernetes.io/projected/b3c409a6-6840-483d-8019-68c6842f8d25-kube-api-access-td8kr\") on node \"crc\" DevicePath \"\""
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.397349 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.418036 4718 scope.go:117] "RemoveContainer" containerID="680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.440239 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3c409a6-6840-483d-8019-68c6842f8d25" (UID: "b3c409a6-6840-483d-8019-68c6842f8d25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.468754 4718 scope.go:117] "RemoveContainer" containerID="90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1"
Dec 10 15:41:59 crc kubenswrapper[4718]: E1210 15:41:59.469571 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1\": container with ID starting with 90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1 not found: ID does not exist" containerID="90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.469638 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1"} err="failed to get container status \"90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1\": rpc error: code = NotFound desc = could not find container \"90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1\": container with ID starting with 90e1ce264c11fca223a878b758d2607840c88490a772dbdae8bc88e6155291a1 not found: ID does not exist"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.469675 4718 scope.go:117] "RemoveContainer" containerID="10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146"
Dec 10 15:41:59 crc kubenswrapper[4718]: E1210 15:41:59.470181 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146\": container with ID starting with 10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146 not found: ID does not exist" containerID="10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.470316 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146"} err="failed to get container status \"10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146\": rpc error: code = NotFound desc = could not find container \"10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146\": container with ID starting with 10cba698f07ee3337a11e6cefd4284f8328e1c174fd738cb6c4db10e2b3f8146 not found: ID does not exist"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.470540 4718 scope.go:117] "RemoveContainer" containerID="680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87"
Dec 10 15:41:59 crc kubenswrapper[4718]: E1210 15:41:59.471062 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87\": container with ID starting with 680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87 not found: ID does not exist" containerID="680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.471110 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87"} err="failed to get container status \"680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87\": rpc error: code = NotFound desc = could not find container \"680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87\": container with ID starting with 680e900f544711763c2fad0dd55eda0e2277012656dbc9b9bebcb9752f3d9d87 not found: ID does not exist"
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.499619 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c409a6-6840-483d-8019-68c6842f8d25-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.676831 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drwpm"]
Dec 10 15:41:59 crc kubenswrapper[4718]: I1210 15:41:59.694948 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drwpm"]
Dec 10 15:42:00 crc kubenswrapper[4718]: I1210 15:42:00.034586 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" path="/var/lib/kubelet/pods/b3c409a6-6840-483d-8019-68c6842f8d25/volumes"
Dec 10 15:42:18 crc kubenswrapper[4718]: I1210 15:42:18.084225 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 15:42:18 crc kubenswrapper[4718]: I1210 15:42:18.084834 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.887208 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s8gdb"]
Dec 10 15:42:26 crc kubenswrapper[4718]: E1210 15:42:26.888267 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="extract-utilities"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.888285 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="extract-utilities"
Dec 10 15:42:26 crc kubenswrapper[4718]: E1210 15:42:26.888321 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="extract-content"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.888329 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="extract-content"
Dec 10 15:42:26 crc kubenswrapper[4718]: E1210 15:42:26.888352 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="registry-server"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.888359 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="registry-server"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.888627 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c409a6-6840-483d-8019-68c6842f8d25" containerName="registry-server"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.890717 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.907072 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8gdb"]
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.929327 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-utilities\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.929580 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-catalog-content\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:26 crc kubenswrapper[4718]: I1210 15:42:26.929747 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98bmb\" (UniqueName: \"kubernetes.io/projected/d370e475-1ad2-49f6-960b-3d973e566abc-kube-api-access-98bmb\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.031877 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-utilities\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.031962 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-catalog-content\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.032003 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98bmb\" (UniqueName: \"kubernetes.io/projected/d370e475-1ad2-49f6-960b-3d973e566abc-kube-api-access-98bmb\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.032507 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-utilities\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.033036 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-catalog-content\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.061896 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98bmb\" (UniqueName: \"kubernetes.io/projected/d370e475-1ad2-49f6-960b-3d973e566abc-kube-api-access-98bmb\") pod \"redhat-marketplace-s8gdb\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.223097 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8gdb"
Dec 10 15:42:27 crc kubenswrapper[4718]: I1210 15:42:27.846965 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8gdb"]
Dec 10 15:42:28 crc kubenswrapper[4718]: I1210 15:42:28.659953 4718 generic.go:334] "Generic (PLEG): container finished" podID="d370e475-1ad2-49f6-960b-3d973e566abc" containerID="fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea" exitCode=0
Dec 10 15:42:28 crc kubenswrapper[4718]: I1210 15:42:28.660006 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8gdb" event={"ID":"d370e475-1ad2-49f6-960b-3d973e566abc","Type":"ContainerDied","Data":"fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea"}
Dec 10 15:42:28 crc kubenswrapper[4718]: I1210 15:42:28.660563 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8gdb" event={"ID":"d370e475-1ad2-49f6-960b-3d973e566abc","Type":"ContainerStarted","Data":"0fc1a246461b9ba5aa4228c3ed723c0efa1481ed130d627e14b794dc0b6869d7"}
Dec 10 15:42:28 crc kubenswrapper[4718]: I1210 15:42:28.662491 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 10 15:42:30 crc kubenswrapper[4718]: I1210 15:42:30.684631 4718 generic.go:334] "Generic (PLEG): container finished" podID="d370e475-1ad2-49f6-960b-3d973e566abc" containerID="b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3" exitCode=0
Dec 10 15:42:30 crc kubenswrapper[4718]: I1210 15:42:30.684702 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8gdb" event={"ID":"d370e475-1ad2-49f6-960b-3d973e566abc","Type":"ContainerDied","Data":"b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3"}
Dec 10 15:42:32 crc kubenswrapper[4718]: I1210 15:42:32.722565 4718 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-marketplace/redhat-marketplace-s8gdb" event={"ID":"d370e475-1ad2-49f6-960b-3d973e566abc","Type":"ContainerStarted","Data":"58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8"} Dec 10 15:42:32 crc kubenswrapper[4718]: I1210 15:42:32.747947 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8gdb" podStartSLOduration=3.01930602 podStartE2EDuration="6.747923356s" podCreationTimestamp="2025-12-10 15:42:26 +0000 UTC" firstStartedPulling="2025-12-10 15:42:28.662220121 +0000 UTC m=+4253.611443538" lastFinishedPulling="2025-12-10 15:42:32.390837457 +0000 UTC m=+4257.340060874" observedRunningTime="2025-12-10 15:42:32.73979367 +0000 UTC m=+4257.689017107" watchObservedRunningTime="2025-12-10 15:42:32.747923356 +0000 UTC m=+4257.697146763" Dec 10 15:42:37 crc kubenswrapper[4718]: I1210 15:42:37.244358 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8gdb" Dec 10 15:42:37 crc kubenswrapper[4718]: I1210 15:42:37.245004 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8gdb" Dec 10 15:42:37 crc kubenswrapper[4718]: I1210 15:42:37.295342 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8gdb" Dec 10 15:42:37 crc kubenswrapper[4718]: I1210 15:42:37.838513 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8gdb" Dec 10 15:42:37 crc kubenswrapper[4718]: I1210 15:42:37.917526 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8gdb"] Dec 10 15:42:39 crc kubenswrapper[4718]: I1210 15:42:39.829655 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s8gdb" 
podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="registry-server" containerID="cri-o://58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8" gracePeriod=2 Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.506444 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8gdb" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.634003 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-utilities\") pod \"d370e475-1ad2-49f6-960b-3d973e566abc\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.634169 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98bmb\" (UniqueName: \"kubernetes.io/projected/d370e475-1ad2-49f6-960b-3d973e566abc-kube-api-access-98bmb\") pod \"d370e475-1ad2-49f6-960b-3d973e566abc\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.634337 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-catalog-content\") pod \"d370e475-1ad2-49f6-960b-3d973e566abc\" (UID: \"d370e475-1ad2-49f6-960b-3d973e566abc\") " Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.635192 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-utilities" (OuterVolumeSpecName: "utilities") pod "d370e475-1ad2-49f6-960b-3d973e566abc" (UID: "d370e475-1ad2-49f6-960b-3d973e566abc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.640743 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d370e475-1ad2-49f6-960b-3d973e566abc-kube-api-access-98bmb" (OuterVolumeSpecName: "kube-api-access-98bmb") pod "d370e475-1ad2-49f6-960b-3d973e566abc" (UID: "d370e475-1ad2-49f6-960b-3d973e566abc"). InnerVolumeSpecName "kube-api-access-98bmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.654677 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d370e475-1ad2-49f6-960b-3d973e566abc" (UID: "d370e475-1ad2-49f6-960b-3d973e566abc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.737105 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.737164 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d370e475-1ad2-49f6-960b-3d973e566abc-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.737177 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98bmb\" (UniqueName: \"kubernetes.io/projected/d370e475-1ad2-49f6-960b-3d973e566abc-kube-api-access-98bmb\") on node \"crc\" DevicePath \"\"" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.842742 4718 generic.go:334] "Generic (PLEG): container finished" podID="d370e475-1ad2-49f6-960b-3d973e566abc" 
containerID="58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8" exitCode=0 Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.842867 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8gdb" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.842852 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8gdb" event={"ID":"d370e475-1ad2-49f6-960b-3d973e566abc","Type":"ContainerDied","Data":"58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8"} Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.843291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8gdb" event={"ID":"d370e475-1ad2-49f6-960b-3d973e566abc","Type":"ContainerDied","Data":"0fc1a246461b9ba5aa4228c3ed723c0efa1481ed130d627e14b794dc0b6869d7"} Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.843327 4718 scope.go:117] "RemoveContainer" containerID="58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.862941 4718 scope.go:117] "RemoveContainer" containerID="b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.897143 4718 scope.go:117] "RemoveContainer" containerID="fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.900961 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8gdb"] Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.917680 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8gdb"] Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.949622 4718 scope.go:117] "RemoveContainer" containerID="58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8" Dec 10 
15:42:40 crc kubenswrapper[4718]: E1210 15:42:40.950508 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8\": container with ID starting with 58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8 not found: ID does not exist" containerID="58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.950568 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8"} err="failed to get container status \"58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8\": rpc error: code = NotFound desc = could not find container \"58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8\": container with ID starting with 58eddd0cc068b22918024c2dddd834b2fa3dffb4a4379fc3122d27159614caa8 not found: ID does not exist" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.950603 4718 scope.go:117] "RemoveContainer" containerID="b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3" Dec 10 15:42:40 crc kubenswrapper[4718]: E1210 15:42:40.951763 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3\": container with ID starting with b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3 not found: ID does not exist" containerID="b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.951922 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3"} err="failed to get container status 
\"b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3\": rpc error: code = NotFound desc = could not find container \"b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3\": container with ID starting with b9217e8d14f17c6947c894931f2e0a3b2dcd8348c3570e74530fe7a4d1c6afa3 not found: ID does not exist" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.952064 4718 scope.go:117] "RemoveContainer" containerID="fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea" Dec 10 15:42:40 crc kubenswrapper[4718]: E1210 15:42:40.954345 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea\": container with ID starting with fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea not found: ID does not exist" containerID="fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea" Dec 10 15:42:40 crc kubenswrapper[4718]: I1210 15:42:40.954428 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea"} err="failed to get container status \"fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea\": rpc error: code = NotFound desc = could not find container \"fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea\": container with ID starting with fcec4e0800910dd21aeb4ec92f1601c0fe6a19fc4b41fd253ed5f83bcb0d17ea not found: ID does not exist" Dec 10 15:42:42 crc kubenswrapper[4718]: I1210 15:42:42.033199 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" path="/var/lib/kubelet/pods/d370e475-1ad2-49f6-960b-3d973e566abc/volumes" Dec 10 15:42:48 crc kubenswrapper[4718]: I1210 15:42:48.084529 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:42:48 crc kubenswrapper[4718]: I1210 15:42:48.085096 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:43:18 crc kubenswrapper[4718]: I1210 15:43:18.084315 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:43:18 crc kubenswrapper[4718]: I1210 15:43:18.085243 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:43:18 crc kubenswrapper[4718]: I1210 15:43:18.085371 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:43:18 crc kubenswrapper[4718]: I1210 15:43:18.087157 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:43:18 crc 
kubenswrapper[4718]: I1210 15:43:18.087321 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" gracePeriod=600 Dec 10 15:43:19 crc kubenswrapper[4718]: E1210 15:43:19.105289 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:43:19 crc kubenswrapper[4718]: I1210 15:43:19.253889 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" exitCode=0 Dec 10 15:43:19 crc kubenswrapper[4718]: I1210 15:43:19.253963 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815"} Dec 10 15:43:19 crc kubenswrapper[4718]: I1210 15:43:19.254023 4718 scope.go:117] "RemoveContainer" containerID="091febd21323ea160579007db4efde5bdc49cc265b8f871db31f8669c62c4ddd" Dec 10 15:43:19 crc kubenswrapper[4718]: I1210 15:43:19.255164 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:43:19 crc kubenswrapper[4718]: E1210 15:43:19.255898 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:43:30 crc kubenswrapper[4718]: I1210 15:43:30.020720 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:43:30 crc kubenswrapper[4718]: E1210 15:43:30.021716 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:43:44 crc kubenswrapper[4718]: I1210 15:43:44.022238 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:43:44 crc kubenswrapper[4718]: E1210 15:43:44.023140 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:43:56 crc kubenswrapper[4718]: I1210 15:43:56.040583 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:43:56 crc kubenswrapper[4718]: E1210 15:43:56.041472 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:44:08 crc kubenswrapper[4718]: I1210 15:44:08.021559 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:44:08 crc kubenswrapper[4718]: E1210 15:44:08.022585 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:44:21 crc kubenswrapper[4718]: I1210 15:44:21.021176 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:44:21 crc kubenswrapper[4718]: E1210 15:44:21.022473 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:44:35 crc kubenswrapper[4718]: I1210 15:44:35.020363 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:44:35 crc kubenswrapper[4718]: E1210 15:44:35.021427 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:44:46 crc kubenswrapper[4718]: I1210 15:44:46.028359 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:44:46 crc kubenswrapper[4718]: E1210 15:44:46.029555 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:44:58 crc kubenswrapper[4718]: I1210 15:44:58.020529 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:44:58 crc kubenswrapper[4718]: E1210 15:44:58.021441 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.189063 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv"] Dec 10 15:45:00 crc kubenswrapper[4718]: E1210 15:45:00.190033 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="registry-server" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.190061 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="registry-server" Dec 10 15:45:00 crc kubenswrapper[4718]: E1210 15:45:00.190083 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="extract-utilities" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.190091 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="extract-utilities" Dec 10 15:45:00 crc kubenswrapper[4718]: E1210 15:45:00.190104 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="extract-content" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.190110 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="extract-content" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.190382 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="d370e475-1ad2-49f6-960b-3d973e566abc" containerName="registry-server" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.191335 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.193981 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.194436 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.205046 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv"] Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.268775 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-config-volume\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.268874 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-secret-volume\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.269021 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctp9t\" (UniqueName: \"kubernetes.io/projected/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-kube-api-access-ctp9t\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.371120 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctp9t\" (UniqueName: \"kubernetes.io/projected/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-kube-api-access-ctp9t\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.371377 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-config-volume\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.371513 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-secret-volume\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.372420 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-config-volume\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.378262 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-secret-volume\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.388540 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctp9t\" (UniqueName: \"kubernetes.io/projected/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-kube-api-access-ctp9t\") pod \"collect-profiles-29423025-qltcv\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:00 crc kubenswrapper[4718]: I1210 15:45:00.517869 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:01 crc kubenswrapper[4718]: I1210 15:45:01.066369 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv"] Dec 10 15:45:01 crc kubenswrapper[4718]: I1210 15:45:01.588088 4718 generic.go:334] "Generic (PLEG): container finished" podID="b7519cf8-6dba-4139-8a89-6d0a5187c5b8" containerID="d0e813890190bb3563842ea854b423deb8eb0f9fbedbeaeb89f7977a370ca408" exitCode=0 Dec 10 15:45:01 crc kubenswrapper[4718]: I1210 15:45:01.588206 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" event={"ID":"b7519cf8-6dba-4139-8a89-6d0a5187c5b8","Type":"ContainerDied","Data":"d0e813890190bb3563842ea854b423deb8eb0f9fbedbeaeb89f7977a370ca408"} Dec 10 15:45:01 crc kubenswrapper[4718]: I1210 15:45:01.588444 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" 
event={"ID":"b7519cf8-6dba-4139-8a89-6d0a5187c5b8","Type":"ContainerStarted","Data":"fdaa5110752c67b94e34c802e3aa0f0e56ae60c5acdf0985f2eb4a9bd7a0589c"} Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.050478 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.242485 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-secret-volume\") pod \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.242659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-config-volume\") pod \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.242754 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctp9t\" (UniqueName: \"kubernetes.io/projected/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-kube-api-access-ctp9t\") pod \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\" (UID: \"b7519cf8-6dba-4139-8a89-6d0a5187c5b8\") " Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.243590 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "b7519cf8-6dba-4139-8a89-6d0a5187c5b8" (UID: "b7519cf8-6dba-4139-8a89-6d0a5187c5b8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.250577 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b7519cf8-6dba-4139-8a89-6d0a5187c5b8" (UID: "b7519cf8-6dba-4139-8a89-6d0a5187c5b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.256883 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-kube-api-access-ctp9t" (OuterVolumeSpecName: "kube-api-access-ctp9t") pod "b7519cf8-6dba-4139-8a89-6d0a5187c5b8" (UID: "b7519cf8-6dba-4139-8a89-6d0a5187c5b8"). InnerVolumeSpecName "kube-api-access-ctp9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.346381 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctp9t\" (UniqueName: \"kubernetes.io/projected/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-kube-api-access-ctp9t\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.346469 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.346487 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7519cf8-6dba-4139-8a89-6d0a5187c5b8-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.611048 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" 
event={"ID":"b7519cf8-6dba-4139-8a89-6d0a5187c5b8","Type":"ContainerDied","Data":"fdaa5110752c67b94e34c802e3aa0f0e56ae60c5acdf0985f2eb4a9bd7a0589c"} Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.611126 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdaa5110752c67b94e34c802e3aa0f0e56ae60c5acdf0985f2eb4a9bd7a0589c" Dec 10 15:45:03 crc kubenswrapper[4718]: I1210 15:45:03.611204 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.121463 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6"] Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.131466 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422980-7wqr6"] Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.687028 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hb6wh"] Dec 10 15:45:04 crc kubenswrapper[4718]: E1210 15:45:04.687830 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7519cf8-6dba-4139-8a89-6d0a5187c5b8" containerName="collect-profiles" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.687853 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7519cf8-6dba-4139-8a89-6d0a5187c5b8" containerName="collect-profiles" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.688095 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7519cf8-6dba-4139-8a89-6d0a5187c5b8" containerName="collect-profiles" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.690372 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.698895 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggl5\" (UniqueName: \"kubernetes.io/projected/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-kube-api-access-cggl5\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.698981 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-catalog-content\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.699157 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-utilities\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.706263 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hb6wh"] Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.800763 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-utilities\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.800863 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cggl5\" (UniqueName: \"kubernetes.io/projected/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-kube-api-access-cggl5\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.800914 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-catalog-content\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.801793 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-catalog-content\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.801953 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-utilities\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:04 crc kubenswrapper[4718]: I1210 15:45:04.823692 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggl5\" (UniqueName: \"kubernetes.io/projected/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-kube-api-access-cggl5\") pod \"community-operators-hb6wh\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:05 crc kubenswrapper[4718]: I1210 15:45:05.020436 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:05 crc kubenswrapper[4718]: I1210 15:45:05.600444 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hb6wh"] Dec 10 15:45:05 crc kubenswrapper[4718]: W1210 15:45:05.602490 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20bbc2dc_feba_44ce_bc96_2cd2ecb35714.slice/crio-d8d914bdcf3b16b8bb0785d3d43e2c4163ba91bd3a09865faba8df20d872403d WatchSource:0}: Error finding container d8d914bdcf3b16b8bb0785d3d43e2c4163ba91bd3a09865faba8df20d872403d: Status 404 returned error can't find the container with id d8d914bdcf3b16b8bb0785d3d43e2c4163ba91bd3a09865faba8df20d872403d Dec 10 15:45:05 crc kubenswrapper[4718]: I1210 15:45:05.636315 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerStarted","Data":"d8d914bdcf3b16b8bb0785d3d43e2c4163ba91bd3a09865faba8df20d872403d"} Dec 10 15:45:06 crc kubenswrapper[4718]: I1210 15:45:06.035963 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372b2022-f87c-4e95-9831-74f4c801d98e" path="/var/lib/kubelet/pods/372b2022-f87c-4e95-9831-74f4c801d98e/volumes" Dec 10 15:45:06 crc kubenswrapper[4718]: I1210 15:45:06.670730 4718 generic.go:334] "Generic (PLEG): container finished" podID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerID="717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74" exitCode=0 Dec 10 15:45:06 crc kubenswrapper[4718]: I1210 15:45:06.670782 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerDied","Data":"717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74"} Dec 10 15:45:08 crc kubenswrapper[4718]: I1210 
15:45:08.695991 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerStarted","Data":"00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37"} Dec 10 15:45:09 crc kubenswrapper[4718]: I1210 15:45:09.707343 4718 generic.go:334] "Generic (PLEG): container finished" podID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerID="00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37" exitCode=0 Dec 10 15:45:09 crc kubenswrapper[4718]: I1210 15:45:09.707399 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerDied","Data":"00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37"} Dec 10 15:45:10 crc kubenswrapper[4718]: I1210 15:45:10.720953 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerStarted","Data":"70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226"} Dec 10 15:45:10 crc kubenswrapper[4718]: I1210 15:45:10.774540 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hb6wh" podStartSLOduration=2.980146666 podStartE2EDuration="6.774477888s" podCreationTimestamp="2025-12-10 15:45:04 +0000 UTC" firstStartedPulling="2025-12-10 15:45:06.676011324 +0000 UTC m=+4411.625234741" lastFinishedPulling="2025-12-10 15:45:10.470342546 +0000 UTC m=+4415.419565963" observedRunningTime="2025-12-10 15:45:10.747626197 +0000 UTC m=+4415.696849624" watchObservedRunningTime="2025-12-10 15:45:10.774477888 +0000 UTC m=+4415.723701305" Dec 10 15:45:13 crc kubenswrapper[4718]: I1210 15:45:13.020628 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 
15:45:13 crc kubenswrapper[4718]: E1210 15:45:13.021606 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:45:15 crc kubenswrapper[4718]: I1210 15:45:15.020749 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:15 crc kubenswrapper[4718]: I1210 15:45:15.021160 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:15 crc kubenswrapper[4718]: I1210 15:45:15.082152 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:15 crc kubenswrapper[4718]: I1210 15:45:15.832819 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:15 crc kubenswrapper[4718]: I1210 15:45:15.908808 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hb6wh"] Dec 10 15:45:17 crc kubenswrapper[4718]: I1210 15:45:17.790607 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hb6wh" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="registry-server" containerID="cri-o://70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226" gracePeriod=2 Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.409825 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.546919 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-utilities\") pod \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.546994 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggl5\" (UniqueName: \"kubernetes.io/projected/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-kube-api-access-cggl5\") pod \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.547151 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-catalog-content\") pod \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\" (UID: \"20bbc2dc-feba-44ce-bc96-2cd2ecb35714\") " Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.550073 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-utilities" (OuterVolumeSpecName: "utilities") pod "20bbc2dc-feba-44ce-bc96-2cd2ecb35714" (UID: "20bbc2dc-feba-44ce-bc96-2cd2ecb35714"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.555051 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-kube-api-access-cggl5" (OuterVolumeSpecName: "kube-api-access-cggl5") pod "20bbc2dc-feba-44ce-bc96-2cd2ecb35714" (UID: "20bbc2dc-feba-44ce-bc96-2cd2ecb35714"). InnerVolumeSpecName "kube-api-access-cggl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.599543 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20bbc2dc-feba-44ce-bc96-2cd2ecb35714" (UID: "20bbc2dc-feba-44ce-bc96-2cd2ecb35714"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.649432 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.649466 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.649484 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cggl5\" (UniqueName: \"kubernetes.io/projected/20bbc2dc-feba-44ce-bc96-2cd2ecb35714-kube-api-access-cggl5\") on node \"crc\" DevicePath \"\"" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.804663 4718 generic.go:334] "Generic (PLEG): container finished" podID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerID="70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226" exitCode=0 Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.804761 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerDied","Data":"70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226"} Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.804834 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-hb6wh" event={"ID":"20bbc2dc-feba-44ce-bc96-2cd2ecb35714","Type":"ContainerDied","Data":"d8d914bdcf3b16b8bb0785d3d43e2c4163ba91bd3a09865faba8df20d872403d"} Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.804945 4718 scope.go:117] "RemoveContainer" containerID="70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.805306 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hb6wh" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.834832 4718 scope.go:117] "RemoveContainer" containerID="00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.851629 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hb6wh"] Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.866220 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hb6wh"] Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.889553 4718 scope.go:117] "RemoveContainer" containerID="717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.919654 4718 scope.go:117] "RemoveContainer" containerID="70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226" Dec 10 15:45:18 crc kubenswrapper[4718]: E1210 15:45:18.920249 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226\": container with ID starting with 70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226 not found: ID does not exist" containerID="70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 
15:45:18.920307 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226"} err="failed to get container status \"70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226\": rpc error: code = NotFound desc = could not find container \"70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226\": container with ID starting with 70695a99fa44564910122046ceafe6a3c37d795ac5205b2bb8de4c14296c8226 not found: ID does not exist" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.920334 4718 scope.go:117] "RemoveContainer" containerID="00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37" Dec 10 15:45:18 crc kubenswrapper[4718]: E1210 15:45:18.920701 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37\": container with ID starting with 00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37 not found: ID does not exist" containerID="00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.920875 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37"} err="failed to get container status \"00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37\": rpc error: code = NotFound desc = could not find container \"00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37\": container with ID starting with 00faa4fef888687ed1a5482b8a615105d717cd9643cfa8b384e345b2b128ec37 not found: ID does not exist" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.921018 4718 scope.go:117] "RemoveContainer" containerID="717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74" Dec 10 15:45:18 crc 
kubenswrapper[4718]: E1210 15:45:18.921373 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74\": container with ID starting with 717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74 not found: ID does not exist" containerID="717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74" Dec 10 15:45:18 crc kubenswrapper[4718]: I1210 15:45:18.921573 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74"} err="failed to get container status \"717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74\": rpc error: code = NotFound desc = could not find container \"717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74\": container with ID starting with 717854315f55472902fff492a5e137c1008e73feeb72f07ce3bb64530c9fcc74 not found: ID does not exist" Dec 10 15:45:20 crc kubenswrapper[4718]: I1210 15:45:20.042535 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" path="/var/lib/kubelet/pods/20bbc2dc-feba-44ce-bc96-2cd2ecb35714/volumes" Dec 10 15:45:27 crc kubenswrapper[4718]: I1210 15:45:27.021927 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:45:27 crc kubenswrapper[4718]: E1210 15:45:27.023316 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:45:41 crc 
kubenswrapper[4718]: I1210 15:45:41.020362 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:45:41 crc kubenswrapper[4718]: E1210 15:45:41.021191 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:45:54 crc kubenswrapper[4718]: I1210 15:45:54.052802 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:45:54 crc kubenswrapper[4718]: E1210 15:45:54.055280 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:45:56 crc kubenswrapper[4718]: I1210 15:45:56.552277 4718 scope.go:117] "RemoveContainer" containerID="78d628407787c880daab351d6dcf7b862932fa2a901de90427eb6a5194b8c793" Dec 10 15:46:06 crc kubenswrapper[4718]: I1210 15:46:06.030972 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:46:06 crc kubenswrapper[4718]: E1210 15:46:06.032248 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:46:20 crc kubenswrapper[4718]: I1210 15:46:20.020252 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:46:20 crc kubenswrapper[4718]: E1210 15:46:20.021340 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:46:34 crc kubenswrapper[4718]: I1210 15:46:34.021661 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:46:34 crc kubenswrapper[4718]: E1210 15:46:34.023137 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:46:47 crc kubenswrapper[4718]: I1210 15:46:47.020993 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:46:47 crc kubenswrapper[4718]: E1210 15:46:47.021865 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:47:01 crc kubenswrapper[4718]: I1210 15:47:01.020467 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:47:01 crc kubenswrapper[4718]: E1210 15:47:01.021247 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:47:16 crc kubenswrapper[4718]: I1210 15:47:16.030295 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:47:16 crc kubenswrapper[4718]: E1210 15:47:16.033452 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:47:31 crc kubenswrapper[4718]: I1210 15:47:31.021257 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:47:31 crc kubenswrapper[4718]: E1210 15:47:31.022127 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:47:43 crc kubenswrapper[4718]: I1210 15:47:43.021131 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:47:43 crc kubenswrapper[4718]: E1210 15:47:43.021901 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:47:56 crc kubenswrapper[4718]: I1210 15:47:56.027285 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:47:56 crc kubenswrapper[4718]: E1210 15:47:56.028167 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:48:07 crc kubenswrapper[4718]: I1210 15:48:07.020780 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:48:07 crc kubenswrapper[4718]: E1210 15:48:07.021613 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:48:19 crc kubenswrapper[4718]: I1210 15:48:19.021847 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:48:19 crc kubenswrapper[4718]: I1210 15:48:19.897597 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"bcff67293106ed7846469b906065ff115e73a0638858cacd7edc48105630b838"} Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.796844 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6f8d8"] Dec 10 15:48:24 crc kubenswrapper[4718]: E1210 15:48:24.799743 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="extract-utilities" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.799901 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="extract-utilities" Dec 10 15:48:24 crc kubenswrapper[4718]: E1210 15:48:24.800049 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="extract-content" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.800152 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="extract-content" Dec 10 15:48:24 crc kubenswrapper[4718]: E1210 15:48:24.800262 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="registry-server" Dec 10 15:48:24 crc 
kubenswrapper[4718]: I1210 15:48:24.800331 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="registry-server" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.800794 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bbc2dc-feba-44ce-bc96-2cd2ecb35714" containerName="registry-server" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.803451 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.822509 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f8d8"] Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.900300 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-catalog-content\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.900474 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-utilities\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:24 crc kubenswrapper[4718]: I1210 15:48:24.900529 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgd2\" (UniqueName: \"kubernetes.io/projected/24a53000-823a-4a63-a595-f181a7a2e01e-kube-api-access-dkgd2\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 
15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.002983 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-utilities\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.003462 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkgd2\" (UniqueName: \"kubernetes.io/projected/24a53000-823a-4a63-a595-f181a7a2e01e-kube-api-access-dkgd2\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.003618 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-catalog-content\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.003978 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-utilities\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.004074 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-catalog-content\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 
15:48:25.047689 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkgd2\" (UniqueName: \"kubernetes.io/projected/24a53000-823a-4a63-a595-f181a7a2e01e-kube-api-access-dkgd2\") pod \"certified-operators-6f8d8\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.127624 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.672890 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6f8d8"] Dec 10 15:48:25 crc kubenswrapper[4718]: I1210 15:48:25.997460 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerStarted","Data":"8f7f1f86ee38aadd649d32b6de31c8e6a8c33b0f3dcc4c34a9b212535bc58e06"} Dec 10 15:48:27 crc kubenswrapper[4718]: I1210 15:48:27.014662 4718 generic.go:334] "Generic (PLEG): container finished" podID="24a53000-823a-4a63-a595-f181a7a2e01e" containerID="dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074" exitCode=0 Dec 10 15:48:27 crc kubenswrapper[4718]: I1210 15:48:27.014912 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerDied","Data":"dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074"} Dec 10 15:48:27 crc kubenswrapper[4718]: I1210 15:48:27.018923 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:48:28 crc kubenswrapper[4718]: I1210 15:48:28.035970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" 
event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerStarted","Data":"31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a"} Dec 10 15:48:29 crc kubenswrapper[4718]: I1210 15:48:29.046538 4718 generic.go:334] "Generic (PLEG): container finished" podID="24a53000-823a-4a63-a595-f181a7a2e01e" containerID="31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a" exitCode=0 Dec 10 15:48:29 crc kubenswrapper[4718]: I1210 15:48:29.046622 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerDied","Data":"31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a"} Dec 10 15:48:31 crc kubenswrapper[4718]: I1210 15:48:31.079048 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerStarted","Data":"21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7"} Dec 10 15:48:31 crc kubenswrapper[4718]: I1210 15:48:31.099380 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6f8d8" podStartSLOduration=4.095879827 podStartE2EDuration="7.099325039s" podCreationTimestamp="2025-12-10 15:48:24 +0000 UTC" firstStartedPulling="2025-12-10 15:48:27.018631865 +0000 UTC m=+4611.967855282" lastFinishedPulling="2025-12-10 15:48:30.022077077 +0000 UTC m=+4614.971300494" observedRunningTime="2025-12-10 15:48:31.096056907 +0000 UTC m=+4616.045280324" watchObservedRunningTime="2025-12-10 15:48:31.099325039 +0000 UTC m=+4616.048548476" Dec 10 15:48:35 crc kubenswrapper[4718]: I1210 15:48:35.128301 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:35 crc kubenswrapper[4718]: I1210 15:48:35.128972 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:35 crc kubenswrapper[4718]: I1210 15:48:35.184254 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:36 crc kubenswrapper[4718]: I1210 15:48:36.840644 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:36 crc kubenswrapper[4718]: I1210 15:48:36.913978 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f8d8"] Dec 10 15:48:38 crc kubenswrapper[4718]: I1210 15:48:38.147357 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6f8d8" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="registry-server" containerID="cri-o://21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7" gracePeriod=2 Dec 10 15:48:38 crc kubenswrapper[4718]: I1210 15:48:38.904233 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.022645 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkgd2\" (UniqueName: \"kubernetes.io/projected/24a53000-823a-4a63-a595-f181a7a2e01e-kube-api-access-dkgd2\") pod \"24a53000-823a-4a63-a595-f181a7a2e01e\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.022909 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-catalog-content\") pod \"24a53000-823a-4a63-a595-f181a7a2e01e\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.023000 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-utilities\") pod \"24a53000-823a-4a63-a595-f181a7a2e01e\" (UID: \"24a53000-823a-4a63-a595-f181a7a2e01e\") " Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.024659 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-utilities" (OuterVolumeSpecName: "utilities") pod "24a53000-823a-4a63-a595-f181a7a2e01e" (UID: "24a53000-823a-4a63-a595-f181a7a2e01e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.032732 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a53000-823a-4a63-a595-f181a7a2e01e-kube-api-access-dkgd2" (OuterVolumeSpecName: "kube-api-access-dkgd2") pod "24a53000-823a-4a63-a595-f181a7a2e01e" (UID: "24a53000-823a-4a63-a595-f181a7a2e01e"). InnerVolumeSpecName "kube-api-access-dkgd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.085921 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24a53000-823a-4a63-a595-f181a7a2e01e" (UID: "24a53000-823a-4a63-a595-f181a7a2e01e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.126036 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkgd2\" (UniqueName: \"kubernetes.io/projected/24a53000-823a-4a63-a595-f181a7a2e01e-kube-api-access-dkgd2\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.126366 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.126380 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a53000-823a-4a63-a595-f181a7a2e01e-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.168411 4718 generic.go:334] "Generic (PLEG): container finished" podID="24a53000-823a-4a63-a595-f181a7a2e01e" containerID="21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7" exitCode=0 Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.168475 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerDied","Data":"21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7"} Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.168494 4718 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6f8d8" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.168517 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6f8d8" event={"ID":"24a53000-823a-4a63-a595-f181a7a2e01e","Type":"ContainerDied","Data":"8f7f1f86ee38aadd649d32b6de31c8e6a8c33b0f3dcc4c34a9b212535bc58e06"} Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.168575 4718 scope.go:117] "RemoveContainer" containerID="21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.211317 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6f8d8"] Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.212726 4718 scope.go:117] "RemoveContainer" containerID="31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.222743 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6f8d8"] Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.262579 4718 scope.go:117] "RemoveContainer" containerID="dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.376989 4718 scope.go:117] "RemoveContainer" containerID="21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7" Dec 10 15:48:39 crc kubenswrapper[4718]: E1210 15:48:39.377776 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7\": container with ID starting with 21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7 not found: ID does not exist" containerID="21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.377837 
4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7"} err="failed to get container status \"21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7\": rpc error: code = NotFound desc = could not find container \"21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7\": container with ID starting with 21208d351b79844b08cc943f09e74fa7cb5f2c6e38fc51074570f375548d20f7 not found: ID does not exist" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.377868 4718 scope.go:117] "RemoveContainer" containerID="31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a" Dec 10 15:48:39 crc kubenswrapper[4718]: E1210 15:48:39.378422 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a\": container with ID starting with 31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a not found: ID does not exist" containerID="31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.378461 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a"} err="failed to get container status \"31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a\": rpc error: code = NotFound desc = could not find container \"31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a\": container with ID starting with 31daff312586e996656c4f08358239de63521361ce990c0a26e77c6e34fb190a not found: ID does not exist" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.378511 4718 scope.go:117] "RemoveContainer" containerID="dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074" Dec 10 15:48:39 crc kubenswrapper[4718]: E1210 
15:48:39.378784 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074\": container with ID starting with dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074 not found: ID does not exist" containerID="dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074" Dec 10 15:48:39 crc kubenswrapper[4718]: I1210 15:48:39.378811 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074"} err="failed to get container status \"dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074\": rpc error: code = NotFound desc = could not find container \"dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074\": container with ID starting with dc57d82bdc7d24e4fb1d70ba575efac852284c96db1d504e2aa2ebbe7c03b074 not found: ID does not exist" Dec 10 15:48:40 crc kubenswrapper[4718]: I1210 15:48:40.051432 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" path="/var/lib/kubelet/pods/24a53000-823a-4a63-a595-f181a7a2e01e/volumes" Dec 10 15:50:48 crc kubenswrapper[4718]: I1210 15:50:48.084584 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:50:48 crc kubenswrapper[4718]: I1210 15:50:48.085176 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 10 15:51:18 crc kubenswrapper[4718]: I1210 15:51:18.084744 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:51:18 crc kubenswrapper[4718]: I1210 15:51:18.085177 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.695558 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f99tp"] Dec 10 15:51:39 crc kubenswrapper[4718]: E1210 15:51:39.696731 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="extract-content" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.696750 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="extract-content" Dec 10 15:51:39 crc kubenswrapper[4718]: E1210 15:51:39.696785 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="extract-utilities" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.696794 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="extract-utilities" Dec 10 15:51:39 crc kubenswrapper[4718]: E1210 15:51:39.696839 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="registry-server" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.696846 4718 
state_mem.go:107] "Deleted CPUSet assignment" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="registry-server" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.697045 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a53000-823a-4a63-a595-f181a7a2e01e" containerName="registry-server" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.698783 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.722567 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f99tp"] Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.844564 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-catalog-content\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.844652 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbf8\" (UniqueName: \"kubernetes.io/projected/144493c5-9000-4f1b-8287-9fbad1dc5f0d-kube-api-access-7kbf8\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.844767 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-utilities\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.948553 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-catalog-content\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.947936 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-catalog-content\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.948661 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kbf8\" (UniqueName: \"kubernetes.io/projected/144493c5-9000-4f1b-8287-9fbad1dc5f0d-kube-api-access-7kbf8\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.948751 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-utilities\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.949204 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-utilities\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:39 crc kubenswrapper[4718]: I1210 15:51:39.983348 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7kbf8\" (UniqueName: \"kubernetes.io/projected/144493c5-9000-4f1b-8287-9fbad1dc5f0d-kube-api-access-7kbf8\") pod \"redhat-operators-f99tp\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:40 crc kubenswrapper[4718]: I1210 15:51:40.037452 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:40 crc kubenswrapper[4718]: I1210 15:51:40.609523 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f99tp"] Dec 10 15:51:41 crc kubenswrapper[4718]: I1210 15:51:41.195637 4718 generic.go:334] "Generic (PLEG): container finished" podID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerID="a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10" exitCode=0 Dec 10 15:51:41 crc kubenswrapper[4718]: I1210 15:51:41.195789 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerDied","Data":"a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10"} Dec 10 15:51:41 crc kubenswrapper[4718]: I1210 15:51:41.196072 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerStarted","Data":"79e9318751f220a12c65614ddb78a3c5ed5b43af3f168942524aa140a763eeaa"} Dec 10 15:51:42 crc kubenswrapper[4718]: I1210 15:51:42.210146 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerStarted","Data":"5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624"} Dec 10 15:51:45 crc kubenswrapper[4718]: I1210 15:51:45.244243 4718 generic.go:334] "Generic (PLEG): container finished" 
podID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerID="5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624" exitCode=0 Dec 10 15:51:45 crc kubenswrapper[4718]: I1210 15:51:45.244302 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerDied","Data":"5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624"} Dec 10 15:51:47 crc kubenswrapper[4718]: I1210 15:51:47.267183 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerStarted","Data":"86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6"} Dec 10 15:51:47 crc kubenswrapper[4718]: I1210 15:51:47.300352 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f99tp" podStartSLOduration=3.807014489 podStartE2EDuration="8.300315627s" podCreationTimestamp="2025-12-10 15:51:39 +0000 UTC" firstStartedPulling="2025-12-10 15:51:41.198177275 +0000 UTC m=+4806.147400692" lastFinishedPulling="2025-12-10 15:51:45.691478413 +0000 UTC m=+4810.640701830" observedRunningTime="2025-12-10 15:51:47.291027142 +0000 UTC m=+4812.240250559" watchObservedRunningTime="2025-12-10 15:51:47.300315627 +0000 UTC m=+4812.249539044" Dec 10 15:51:48 crc kubenswrapper[4718]: I1210 15:51:48.084237 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:51:48 crc kubenswrapper[4718]: I1210 15:51:48.084595 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:51:48 crc kubenswrapper[4718]: I1210 15:51:48.084641 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:51:48 crc kubenswrapper[4718]: I1210 15:51:48.085502 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcff67293106ed7846469b906065ff115e73a0638858cacd7edc48105630b838"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:51:48 crc kubenswrapper[4718]: I1210 15:51:48.085570 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://bcff67293106ed7846469b906065ff115e73a0638858cacd7edc48105630b838" gracePeriod=600 Dec 10 15:51:49 crc kubenswrapper[4718]: I1210 15:51:49.291919 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="bcff67293106ed7846469b906065ff115e73a0638858cacd7edc48105630b838" exitCode=0 Dec 10 15:51:49 crc kubenswrapper[4718]: I1210 15:51:49.292008 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"bcff67293106ed7846469b906065ff115e73a0638858cacd7edc48105630b838"} Dec 10 15:51:49 crc kubenswrapper[4718]: I1210 15:51:49.292336 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530"} Dec 10 15:51:49 crc kubenswrapper[4718]: I1210 15:51:49.292424 4718 scope.go:117] "RemoveContainer" containerID="cb6e06156cef44ea6acb938b6306c1862c0bc8b47e9e3894d04fd052dc237815" Dec 10 15:51:50 crc kubenswrapper[4718]: I1210 15:51:50.038282 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:50 crc kubenswrapper[4718]: I1210 15:51:50.038651 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:51:51 crc kubenswrapper[4718]: I1210 15:51:51.096502 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f99tp" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="registry-server" probeResult="failure" output=< Dec 10 15:51:51 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 15:51:51 crc kubenswrapper[4718]: > Dec 10 15:52:00 crc kubenswrapper[4718]: I1210 15:52:00.095910 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:52:00 crc kubenswrapper[4718]: I1210 15:52:00.155956 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:52:00 crc kubenswrapper[4718]: I1210 15:52:00.337151 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f99tp"] Dec 10 15:52:01 crc kubenswrapper[4718]: I1210 15:52:01.417989 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f99tp" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="registry-server" 
containerID="cri-o://86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6" gracePeriod=2 Dec 10 15:52:01 crc kubenswrapper[4718]: I1210 15:52:01.982569 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.032039 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kbf8\" (UniqueName: \"kubernetes.io/projected/144493c5-9000-4f1b-8287-9fbad1dc5f0d-kube-api-access-7kbf8\") pod \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.032304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-catalog-content\") pod \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.032473 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-utilities\") pod \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\" (UID: \"144493c5-9000-4f1b-8287-9fbad1dc5f0d\") " Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.033632 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-utilities" (OuterVolumeSpecName: "utilities") pod "144493c5-9000-4f1b-8287-9fbad1dc5f0d" (UID: "144493c5-9000-4f1b-8287-9fbad1dc5f0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.046663 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144493c5-9000-4f1b-8287-9fbad1dc5f0d-kube-api-access-7kbf8" (OuterVolumeSpecName: "kube-api-access-7kbf8") pod "144493c5-9000-4f1b-8287-9fbad1dc5f0d" (UID: "144493c5-9000-4f1b-8287-9fbad1dc5f0d"). InnerVolumeSpecName "kube-api-access-7kbf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.135889 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kbf8\" (UniqueName: \"kubernetes.io/projected/144493c5-9000-4f1b-8287-9fbad1dc5f0d-kube-api-access-7kbf8\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.135951 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.175318 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "144493c5-9000-4f1b-8287-9fbad1dc5f0d" (UID: "144493c5-9000-4f1b-8287-9fbad1dc5f0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.238647 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/144493c5-9000-4f1b-8287-9fbad1dc5f0d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.436772 4718 generic.go:334] "Generic (PLEG): container finished" podID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerID="86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6" exitCode=0 Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.436903 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f99tp" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.436926 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerDied","Data":"86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6"} Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.439080 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f99tp" event={"ID":"144493c5-9000-4f1b-8287-9fbad1dc5f0d","Type":"ContainerDied","Data":"79e9318751f220a12c65614ddb78a3c5ed5b43af3f168942524aa140a763eeaa"} Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.439225 4718 scope.go:117] "RemoveContainer" containerID="86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.484104 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f99tp"] Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.493556 4718 scope.go:117] "RemoveContainer" containerID="5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 
15:52:02.494211 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f99tp"] Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.527615 4718 scope.go:117] "RemoveContainer" containerID="a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.589720 4718 scope.go:117] "RemoveContainer" containerID="86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6" Dec 10 15:52:02 crc kubenswrapper[4718]: E1210 15:52:02.590351 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6\": container with ID starting with 86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6 not found: ID does not exist" containerID="86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.590413 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6"} err="failed to get container status \"86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6\": rpc error: code = NotFound desc = could not find container \"86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6\": container with ID starting with 86448afac65401dd2395acd0aa1ee861930de8c366a4f0ccfdc560282fc869e6 not found: ID does not exist" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.590444 4718 scope.go:117] "RemoveContainer" containerID="5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624" Dec 10 15:52:02 crc kubenswrapper[4718]: E1210 15:52:02.590795 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624\": container with ID 
starting with 5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624 not found: ID does not exist" containerID="5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.590831 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624"} err="failed to get container status \"5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624\": rpc error: code = NotFound desc = could not find container \"5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624\": container with ID starting with 5337d1c05d90b091240410c2af384668cae69bf78453e3b7c47a8a1fc51e8624 not found: ID does not exist" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.590850 4718 scope.go:117] "RemoveContainer" containerID="a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10" Dec 10 15:52:02 crc kubenswrapper[4718]: E1210 15:52:02.591261 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10\": container with ID starting with a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10 not found: ID does not exist" containerID="a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10" Dec 10 15:52:02 crc kubenswrapper[4718]: I1210 15:52:02.591297 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10"} err="failed to get container status \"a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10\": rpc error: code = NotFound desc = could not find container \"a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10\": container with ID starting with a802525ad076af9afd22db4171943deaed308544f57ad452709de2d894e07c10 not found: 
ID does not exist" Dec 10 15:52:04 crc kubenswrapper[4718]: I1210 15:52:04.040327 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" path="/var/lib/kubelet/pods/144493c5-9000-4f1b-8287-9fbad1dc5f0d/volumes" Dec 10 15:54:18 crc kubenswrapper[4718]: I1210 15:54:18.084254 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:54:18 crc kubenswrapper[4718]: I1210 15:54:18.084827 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:54:48 crc kubenswrapper[4718]: I1210 15:54:48.084053 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:54:48 crc kubenswrapper[4718]: I1210 15:54:48.084627 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.084035 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.084594 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.084668 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.085580 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.085638 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" gracePeriod=600 Dec 10 15:55:18 crc kubenswrapper[4718]: E1210 15:55:18.335076 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.545258 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" exitCode=0 Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.545321 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530"} Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.546029 4718 scope.go:117] "RemoveContainer" containerID="bcff67293106ed7846469b906065ff115e73a0638858cacd7edc48105630b838" Dec 10 15:55:18 crc kubenswrapper[4718]: I1210 15:55:18.546944 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:55:18 crc kubenswrapper[4718]: E1210 15:55:18.547286 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:55:32 crc kubenswrapper[4718]: I1210 15:55:32.021844 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:55:32 crc kubenswrapper[4718]: E1210 15:55:32.023071 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:55:47 crc kubenswrapper[4718]: I1210 15:55:47.021298 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:55:47 crc kubenswrapper[4718]: E1210 15:55:47.022479 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:56:01 crc kubenswrapper[4718]: I1210 15:56:01.021842 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:56:01 crc kubenswrapper[4718]: E1210 15:56:01.023269 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:56:12 crc kubenswrapper[4718]: I1210 15:56:12.021335 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:56:12 crc kubenswrapper[4718]: E1210 15:56:12.022091 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:56:26 crc kubenswrapper[4718]: I1210 15:56:26.030354 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:56:26 crc kubenswrapper[4718]: E1210 15:56:26.032092 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:56:41 crc kubenswrapper[4718]: I1210 15:56:41.021539 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:56:41 crc kubenswrapper[4718]: E1210 15:56:41.022370 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:56:52 crc kubenswrapper[4718]: I1210 15:56:52.020960 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:56:52 crc kubenswrapper[4718]: E1210 15:56:52.021771 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:57:05 crc kubenswrapper[4718]: I1210 15:57:05.020257 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:57:05 crc kubenswrapper[4718]: E1210 15:57:05.021302 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:57:18 crc kubenswrapper[4718]: I1210 15:57:18.021922 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:57:18 crc kubenswrapper[4718]: E1210 15:57:18.023057 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:57:29 crc kubenswrapper[4718]: I1210 15:57:29.020928 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:57:29 crc kubenswrapper[4718]: E1210 15:57:29.021719 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:57:40 crc kubenswrapper[4718]: I1210 15:57:40.020877 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:57:40 crc kubenswrapper[4718]: E1210 15:57:40.021848 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:57:53 crc kubenswrapper[4718]: I1210 15:57:53.023039 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:57:53 crc kubenswrapper[4718]: E1210 15:57:53.023791 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:58:04 crc kubenswrapper[4718]: I1210 15:58:04.021378 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:58:04 crc kubenswrapper[4718]: E1210 15:58:04.022736 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:58:16 crc kubenswrapper[4718]: I1210 15:58:16.029704 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:58:16 crc kubenswrapper[4718]: E1210 15:58:16.030899 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:58:31 crc kubenswrapper[4718]: I1210 15:58:31.020966 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:58:31 crc kubenswrapper[4718]: E1210 15:58:31.022621 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:58:42 crc kubenswrapper[4718]: I1210 15:58:42.021206 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:58:42 crc kubenswrapper[4718]: E1210 15:58:42.022132 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:58:53 crc kubenswrapper[4718]: I1210 15:58:53.020356 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:58:53 crc kubenswrapper[4718]: E1210 15:58:53.021037 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:59:08 crc kubenswrapper[4718]: I1210 15:59:08.021489 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:59:08 crc kubenswrapper[4718]: E1210 15:59:08.022893 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.020456 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:59:23 crc kubenswrapper[4718]: E1210 15:59:23.021484 4718 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.311686 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nkl2v"] Dec 10 15:59:23 crc kubenswrapper[4718]: E1210 15:59:23.312631 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="registry-server" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.312671 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="registry-server" Dec 10 15:59:23 crc kubenswrapper[4718]: E1210 15:59:23.312756 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="extract-utilities" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.312766 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="extract-utilities" Dec 10 15:59:23 crc kubenswrapper[4718]: E1210 15:59:23.312788 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="extract-content" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.312798 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="extract-content" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.313056 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="144493c5-9000-4f1b-8287-9fbad1dc5f0d" containerName="registry-server" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 
15:59:23.315045 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.348770 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkl2v"] Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.399605 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lflf\" (UniqueName: \"kubernetes.io/projected/14c4b569-fb8d-42a7-be5d-150209d91edb-kube-api-access-2lflf\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.400035 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-catalog-content\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.400294 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-utilities\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.502711 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lflf\" (UniqueName: \"kubernetes.io/projected/14c4b569-fb8d-42a7-be5d-150209d91edb-kube-api-access-2lflf\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc 
kubenswrapper[4718]: I1210 15:59:23.502841 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-catalog-content\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.502894 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-utilities\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.503505 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-catalog-content\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.503532 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-utilities\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.524974 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lflf\" (UniqueName: \"kubernetes.io/projected/14c4b569-fb8d-42a7-be5d-150209d91edb-kube-api-access-2lflf\") pod \"certified-operators-nkl2v\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:23 crc kubenswrapper[4718]: I1210 15:59:23.659206 4718 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:24 crc kubenswrapper[4718]: I1210 15:59:24.250982 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkl2v"] Dec 10 15:59:25 crc kubenswrapper[4718]: I1210 15:59:25.390802 4718 generic.go:334] "Generic (PLEG): container finished" podID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerID="c025e65194fdc9fc30972dcd25b90c38b874267f82fdfca081eed8d562a6066b" exitCode=0 Dec 10 15:59:25 crc kubenswrapper[4718]: I1210 15:59:25.391180 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerDied","Data":"c025e65194fdc9fc30972dcd25b90c38b874267f82fdfca081eed8d562a6066b"} Dec 10 15:59:25 crc kubenswrapper[4718]: I1210 15:59:25.391229 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerStarted","Data":"e6659c48d0ee6e86b2143be4f44964688757e76081515e76cc95643bee3dfdd9"} Dec 10 15:59:25 crc kubenswrapper[4718]: I1210 15:59:25.394323 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 15:59:26 crc kubenswrapper[4718]: I1210 15:59:26.403059 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerStarted","Data":"37faa53b4f20bddb881706837b58569a56ba8229076bafc7282e6028c9205569"} Dec 10 15:59:27 crc kubenswrapper[4718]: I1210 15:59:27.416106 4718 generic.go:334] "Generic (PLEG): container finished" podID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerID="37faa53b4f20bddb881706837b58569a56ba8229076bafc7282e6028c9205569" exitCode=0 Dec 10 15:59:27 crc kubenswrapper[4718]: I1210 
15:59:27.416160 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerDied","Data":"37faa53b4f20bddb881706837b58569a56ba8229076bafc7282e6028c9205569"} Dec 10 15:59:29 crc kubenswrapper[4718]: I1210 15:59:29.610346 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerStarted","Data":"30a7c30a2ca73dd89b31a873749e5f73dc8442d3c0435b1ed00dd0fa39a957a9"} Dec 10 15:59:29 crc kubenswrapper[4718]: I1210 15:59:29.641035 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nkl2v" podStartSLOduration=4.126425067 podStartE2EDuration="6.640992868s" podCreationTimestamp="2025-12-10 15:59:23 +0000 UTC" firstStartedPulling="2025-12-10 15:59:25.394004514 +0000 UTC m=+5270.343227931" lastFinishedPulling="2025-12-10 15:59:27.908572315 +0000 UTC m=+5272.857795732" observedRunningTime="2025-12-10 15:59:29.631652333 +0000 UTC m=+5274.580875760" watchObservedRunningTime="2025-12-10 15:59:29.640992868 +0000 UTC m=+5274.590216285" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.071453 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-csck6"] Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.076349 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.087281 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-csck6"] Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.223319 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-catalog-content\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.223565 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mf7\" (UniqueName: \"kubernetes.io/projected/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-kube-api-access-24mf7\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.223777 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-utilities\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.329019 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-catalog-content\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.329148 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-24mf7\" (UniqueName: \"kubernetes.io/projected/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-kube-api-access-24mf7\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.329290 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-utilities\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.329611 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-catalog-content\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.329908 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-utilities\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.355667 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24mf7\" (UniqueName: \"kubernetes.io/projected/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-kube-api-access-24mf7\") pod \"community-operators-csck6\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:31 crc kubenswrapper[4718]: I1210 15:59:31.397920 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:32 crc kubenswrapper[4718]: W1210 15:59:32.083193 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod938888ad_ca4f_4575_8bb9_bb8f4aaaf04b.slice/crio-5d9b12d5064ed71895612346f6f9ef5a7ffb7e7c3daa741d5c908f2209d501d9 WatchSource:0}: Error finding container 5d9b12d5064ed71895612346f6f9ef5a7ffb7e7c3daa741d5c908f2209d501d9: Status 404 returned error can't find the container with id 5d9b12d5064ed71895612346f6f9ef5a7ffb7e7c3daa741d5c908f2209d501d9 Dec 10 15:59:32 crc kubenswrapper[4718]: I1210 15:59:32.084475 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-csck6"] Dec 10 15:59:32 crc kubenswrapper[4718]: I1210 15:59:32.644446 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerStarted","Data":"5d9b12d5064ed71895612346f6f9ef5a7ffb7e7c3daa741d5c908f2209d501d9"} Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.284683 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-snlkg"] Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.287201 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.305301 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snlkg"] Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.390708 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8sr\" (UniqueName: \"kubernetes.io/projected/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-kube-api-access-nr8sr\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.391349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-utilities\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.391419 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-catalog-content\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.494165 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8sr\" (UniqueName: \"kubernetes.io/projected/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-kube-api-access-nr8sr\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.494259 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-utilities\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.494292 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-catalog-content\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.494940 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-catalog-content\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.495002 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-utilities\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.515912 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8sr\" (UniqueName: \"kubernetes.io/projected/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-kube-api-access-nr8sr\") pod \"redhat-marketplace-snlkg\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.623221 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.659493 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.659568 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.659608 4718 generic.go:334] "Generic (PLEG): container finished" podID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerID="44adfea0a4a7c7194353a21bbff0b8ebc2d677a07591cd5bed6e63fd4ba1f4cd" exitCode=0 Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.659646 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerDied","Data":"44adfea0a4a7c7194353a21bbff0b8ebc2d677a07591cd5bed6e63fd4ba1f4cd"} Dec 10 15:59:33 crc kubenswrapper[4718]: I1210 15:59:33.776327 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:34 crc kubenswrapper[4718]: W1210 15:59:34.145314 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4d5a1e3_2e1c_4570_b36d_0793123c8a0d.slice/crio-ada252dd6d66408f95b54b2f4b978533467ec1e2a3c365c85c28e98d2689c473 WatchSource:0}: Error finding container ada252dd6d66408f95b54b2f4b978533467ec1e2a3c365c85c28e98d2689c473: Status 404 returned error can't find the container with id ada252dd6d66408f95b54b2f4b978533467ec1e2a3c365c85c28e98d2689c473 Dec 10 15:59:34 crc kubenswrapper[4718]: I1210 15:59:34.150566 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-snlkg"] Dec 10 15:59:34 crc kubenswrapper[4718]: I1210 15:59:34.676435 
4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snlkg" event={"ID":"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d","Type":"ContainerStarted","Data":"ada252dd6d66408f95b54b2f4b978533467ec1e2a3c365c85c28e98d2689c473"} Dec 10 15:59:34 crc kubenswrapper[4718]: I1210 15:59:34.748473 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:35 crc kubenswrapper[4718]: I1210 15:59:35.692299 4718 generic.go:334] "Generic (PLEG): container finished" podID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerID="e32e6d84ca5c2605bae829deee4ad0749b2dd9dc91384f4d9d9f9eb03fa88f64" exitCode=0 Dec 10 15:59:35 crc kubenswrapper[4718]: I1210 15:59:35.692365 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snlkg" event={"ID":"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d","Type":"ContainerDied","Data":"e32e6d84ca5c2605bae829deee4ad0749b2dd9dc91384f4d9d9f9eb03fa88f64"} Dec 10 15:59:35 crc kubenswrapper[4718]: I1210 15:59:35.698464 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerStarted","Data":"262ef2840a389ab07f08611727717744dfd7f80c57a2866294161e9f39b1d661"} Dec 10 15:59:36 crc kubenswrapper[4718]: I1210 15:59:36.029839 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:59:36 crc kubenswrapper[4718]: E1210 15:59:36.030482 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:59:36 crc kubenswrapper[4718]: I1210 15:59:36.461175 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkl2v"] Dec 10 15:59:36 crc kubenswrapper[4718]: I1210 15:59:36.710776 4718 generic.go:334] "Generic (PLEG): container finished" podID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerID="262ef2840a389ab07f08611727717744dfd7f80c57a2866294161e9f39b1d661" exitCode=0 Dec 10 15:59:36 crc kubenswrapper[4718]: I1210 15:59:36.710885 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerDied","Data":"262ef2840a389ab07f08611727717744dfd7f80c57a2866294161e9f39b1d661"} Dec 10 15:59:36 crc kubenswrapper[4718]: I1210 15:59:36.711678 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nkl2v" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="registry-server" containerID="cri-o://30a7c30a2ca73dd89b31a873749e5f73dc8442d3c0435b1ed00dd0fa39a957a9" gracePeriod=2 Dec 10 15:59:39 crc kubenswrapper[4718]: I1210 15:59:39.743686 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerDied","Data":"30a7c30a2ca73dd89b31a873749e5f73dc8442d3c0435b1ed00dd0fa39a957a9"} Dec 10 15:59:39 crc kubenswrapper[4718]: I1210 15:59:39.743693 4718 generic.go:334] "Generic (PLEG): container finished" podID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerID="30a7c30a2ca73dd89b31a873749e5f73dc8442d3c0435b1ed00dd0fa39a957a9" exitCode=0 Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.217852 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.395257 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lflf\" (UniqueName: \"kubernetes.io/projected/14c4b569-fb8d-42a7-be5d-150209d91edb-kube-api-access-2lflf\") pod \"14c4b569-fb8d-42a7-be5d-150209d91edb\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.395510 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-utilities\") pod \"14c4b569-fb8d-42a7-be5d-150209d91edb\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.395568 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-catalog-content\") pod \"14c4b569-fb8d-42a7-be5d-150209d91edb\" (UID: \"14c4b569-fb8d-42a7-be5d-150209d91edb\") " Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.396368 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-utilities" (OuterVolumeSpecName: "utilities") pod "14c4b569-fb8d-42a7-be5d-150209d91edb" (UID: "14c4b569-fb8d-42a7-be5d-150209d91edb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.404675 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c4b569-fb8d-42a7-be5d-150209d91edb-kube-api-access-2lflf" (OuterVolumeSpecName: "kube-api-access-2lflf") pod "14c4b569-fb8d-42a7-be5d-150209d91edb" (UID: "14c4b569-fb8d-42a7-be5d-150209d91edb"). InnerVolumeSpecName "kube-api-access-2lflf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.446750 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c4b569-fb8d-42a7-be5d-150209d91edb" (UID: "14c4b569-fb8d-42a7-be5d-150209d91edb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.498629 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.498687 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c4b569-fb8d-42a7-be5d-150209d91edb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.498706 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lflf\" (UniqueName: \"kubernetes.io/projected/14c4b569-fb8d-42a7-be5d-150209d91edb-kube-api-access-2lflf\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.757690 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerStarted","Data":"6625c1c0b68f5046f30e26b637a987a5c7b902dcde3ce8f7a00405dbb79a3371"} Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.760164 4718 generic.go:334] "Generic (PLEG): container finished" podID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerID="27c07d26f1c41688722ac07e9999c88c2da42c7bce913cf6570ad577b597b073" exitCode=0 Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.760249 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-snlkg" event={"ID":"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d","Type":"ContainerDied","Data":"27c07d26f1c41688722ac07e9999c88c2da42c7bce913cf6570ad577b597b073"} Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.763011 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkl2v" event={"ID":"14c4b569-fb8d-42a7-be5d-150209d91edb","Type":"ContainerDied","Data":"e6659c48d0ee6e86b2143be4f44964688757e76081515e76cc95643bee3dfdd9"} Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.763054 4718 scope.go:117] "RemoveContainer" containerID="30a7c30a2ca73dd89b31a873749e5f73dc8442d3c0435b1ed00dd0fa39a957a9" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.763060 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkl2v" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.785924 4718 scope.go:117] "RemoveContainer" containerID="37faa53b4f20bddb881706837b58569a56ba8229076bafc7282e6028c9205569" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.789199 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-csck6" podStartSLOduration=3.528210541 podStartE2EDuration="9.789159345s" podCreationTimestamp="2025-12-10 15:59:31 +0000 UTC" firstStartedPulling="2025-12-10 15:59:33.66496796 +0000 UTC m=+5278.614191377" lastFinishedPulling="2025-12-10 15:59:39.925916764 +0000 UTC m=+5284.875140181" observedRunningTime="2025-12-10 15:59:40.784243337 +0000 UTC m=+5285.733466754" watchObservedRunningTime="2025-12-10 15:59:40.789159345 +0000 UTC m=+5285.738382762" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.826180 4718 scope.go:117] "RemoveContainer" containerID="c025e65194fdc9fc30972dcd25b90c38b874267f82fdfca081eed8d562a6066b" Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.871767 4718 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkl2v"] Dec 10 15:59:40 crc kubenswrapper[4718]: I1210 15:59:40.885372 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nkl2v"] Dec 10 15:59:41 crc kubenswrapper[4718]: I1210 15:59:41.398137 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:41 crc kubenswrapper[4718]: I1210 15:59:41.398217 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:41 crc kubenswrapper[4718]: I1210 15:59:41.776601 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snlkg" event={"ID":"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d","Type":"ContainerStarted","Data":"e28eeddab739dc071b8414685b617b45e02c9b3e83dd89341a073b596698a188"} Dec 10 15:59:41 crc kubenswrapper[4718]: I1210 15:59:41.806551 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-snlkg" podStartSLOduration=3.250696094 podStartE2EDuration="8.806528423s" podCreationTimestamp="2025-12-10 15:59:33 +0000 UTC" firstStartedPulling="2025-12-10 15:59:35.694662795 +0000 UTC m=+5280.643886212" lastFinishedPulling="2025-12-10 15:59:41.250495124 +0000 UTC m=+5286.199718541" observedRunningTime="2025-12-10 15:59:41.804335765 +0000 UTC m=+5286.753559192" watchObservedRunningTime="2025-12-10 15:59:41.806528423 +0000 UTC m=+5286.755751840" Dec 10 15:59:42 crc kubenswrapper[4718]: I1210 15:59:42.034232 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" path="/var/lib/kubelet/pods/14c4b569-fb8d-42a7-be5d-150209d91edb/volumes" Dec 10 15:59:42 crc kubenswrapper[4718]: I1210 15:59:42.458323 4718 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-csck6" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="registry-server" probeResult="failure" output=< Dec 10 15:59:42 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 15:59:42 crc kubenswrapper[4718]: > Dec 10 15:59:43 crc kubenswrapper[4718]: I1210 15:59:43.623665 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:43 crc kubenswrapper[4718]: I1210 15:59:43.623758 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:43 crc kubenswrapper[4718]: I1210 15:59:43.680481 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:50 crc kubenswrapper[4718]: I1210 15:59:50.020548 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 15:59:50 crc kubenswrapper[4718]: E1210 15:59:50.022547 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 15:59:51 crc kubenswrapper[4718]: I1210 15:59:51.456927 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:51 crc kubenswrapper[4718]: I1210 15:59:51.512771 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:53 crc kubenswrapper[4718]: I1210 15:59:53.483087 4718 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-csck6"] Dec 10 15:59:53 crc kubenswrapper[4718]: I1210 15:59:53.483549 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-csck6" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="registry-server" containerID="cri-o://6625c1c0b68f5046f30e26b637a987a5c7b902dcde3ce8f7a00405dbb79a3371" gracePeriod=2 Dec 10 15:59:53 crc kubenswrapper[4718]: I1210 15:59:53.689019 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:53 crc kubenswrapper[4718]: I1210 15:59:53.906803 4718 generic.go:334] "Generic (PLEG): container finished" podID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerID="6625c1c0b68f5046f30e26b637a987a5c7b902dcde3ce8f7a00405dbb79a3371" exitCode=0 Dec 10 15:59:53 crc kubenswrapper[4718]: I1210 15:59:53.906841 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerDied","Data":"6625c1c0b68f5046f30e26b637a987a5c7b902dcde3ce8f7a00405dbb79a3371"} Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.485323 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.639050 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24mf7\" (UniqueName: \"kubernetes.io/projected/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-kube-api-access-24mf7\") pod \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.639422 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-utilities\") pod \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.639515 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-catalog-content\") pod \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\" (UID: \"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b\") " Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.640112 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-utilities" (OuterVolumeSpecName: "utilities") pod "938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" (UID: "938888ad-ca4f-4575-8bb9-bb8f4aaaf04b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.645838 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-kube-api-access-24mf7" (OuterVolumeSpecName: "kube-api-access-24mf7") pod "938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" (UID: "938888ad-ca4f-4575-8bb9-bb8f4aaaf04b"). InnerVolumeSpecName "kube-api-access-24mf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.701059 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" (UID: "938888ad-ca4f-4575-8bb9-bb8f4aaaf04b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.741775 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.741824 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.741835 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24mf7\" (UniqueName: \"kubernetes.io/projected/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b-kube-api-access-24mf7\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.922415 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-csck6" event={"ID":"938888ad-ca4f-4575-8bb9-bb8f4aaaf04b","Type":"ContainerDied","Data":"5d9b12d5064ed71895612346f6f9ef5a7ffb7e7c3daa741d5c908f2209d501d9"} Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.922468 4718 scope.go:117] "RemoveContainer" containerID="6625c1c0b68f5046f30e26b637a987a5c7b902dcde3ce8f7a00405dbb79a3371" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.922517 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-csck6" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.947653 4718 scope.go:117] "RemoveContainer" containerID="262ef2840a389ab07f08611727717744dfd7f80c57a2866294161e9f39b1d661" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.971286 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-csck6"] Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.974438 4718 scope.go:117] "RemoveContainer" containerID="44adfea0a4a7c7194353a21bbff0b8ebc2d677a07591cd5bed6e63fd4ba1f4cd" Dec 10 15:59:54 crc kubenswrapper[4718]: I1210 15:59:54.982038 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-csck6"] Dec 10 15:59:56 crc kubenswrapper[4718]: I1210 15:59:56.034857 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" path="/var/lib/kubelet/pods/938888ad-ca4f-4575-8bb9-bb8f4aaaf04b/volumes" Dec 10 15:59:57 crc kubenswrapper[4718]: I1210 15:59:57.665061 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snlkg"] Dec 10 15:59:57 crc kubenswrapper[4718]: I1210 15:59:57.666014 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-snlkg" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="registry-server" containerID="cri-o://e28eeddab739dc071b8414685b617b45e02c9b3e83dd89341a073b596698a188" gracePeriod=2 Dec 10 15:59:57 crc kubenswrapper[4718]: I1210 15:59:57.959505 4718 generic.go:334] "Generic (PLEG): container finished" podID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerID="e28eeddab739dc071b8414685b617b45e02c9b3e83dd89341a073b596698a188" exitCode=0 Dec 10 15:59:57 crc kubenswrapper[4718]: I1210 15:59:57.959759 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-snlkg" event={"ID":"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d","Type":"ContainerDied","Data":"e28eeddab739dc071b8414685b617b45e02c9b3e83dd89341a073b596698a188"} Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.298718 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.456653 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-utilities\") pod \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.456712 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-catalog-content\") pod \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.456916 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8sr\" (UniqueName: \"kubernetes.io/projected/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-kube-api-access-nr8sr\") pod \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\" (UID: \"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d\") " Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.458029 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-utilities" (OuterVolumeSpecName: "utilities") pod "e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" (UID: "e4d5a1e3-2e1c-4570-b36d-0793123c8a0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.463852 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-kube-api-access-nr8sr" (OuterVolumeSpecName: "kube-api-access-nr8sr") pod "e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" (UID: "e4d5a1e3-2e1c-4570-b36d-0793123c8a0d"). InnerVolumeSpecName "kube-api-access-nr8sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.485718 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" (UID: "e4d5a1e3-2e1c-4570-b36d-0793123c8a0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.559984 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr8sr\" (UniqueName: \"kubernetes.io/projected/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-kube-api-access-nr8sr\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.560030 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.560043 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.973368 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-snlkg" 
event={"ID":"e4d5a1e3-2e1c-4570-b36d-0793123c8a0d","Type":"ContainerDied","Data":"ada252dd6d66408f95b54b2f4b978533467ec1e2a3c365c85c28e98d2689c473"} Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.973449 4718 scope.go:117] "RemoveContainer" containerID="e28eeddab739dc071b8414685b617b45e02c9b3e83dd89341a073b596698a188" Dec 10 15:59:58 crc kubenswrapper[4718]: I1210 15:59:58.973644 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-snlkg" Dec 10 15:59:59 crc kubenswrapper[4718]: I1210 15:59:59.017746 4718 scope.go:117] "RemoveContainer" containerID="27c07d26f1c41688722ac07e9999c88c2da42c7bce913cf6570ad577b597b073" Dec 10 15:59:59 crc kubenswrapper[4718]: I1210 15:59:59.018907 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-snlkg"] Dec 10 15:59:59 crc kubenswrapper[4718]: I1210 15:59:59.029075 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-snlkg"] Dec 10 15:59:59 crc kubenswrapper[4718]: I1210 15:59:59.044288 4718 scope.go:117] "RemoveContainer" containerID="e32e6d84ca5c2605bae829deee4ad0749b2dd9dc91384f4d9d9f9eb03fa88f64" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.035818 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" path="/var/lib/kubelet/pods/e4d5a1e3-2e1c-4570-b36d-0793123c8a0d/volumes" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.154736 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc"] Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155444 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155487 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155525 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155534 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155557 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155565 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155607 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155616 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155632 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155640 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155652 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155660 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="extract-utilities" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155676 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155684 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155705 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155716 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4718]: E1210 16:00:00.155728 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.155736 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="extract-content" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.156033 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c4b569-fb8d-42a7-be5d-150209d91edb" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.156084 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="938888ad-ca4f-4575-8bb9-bb8f4aaaf04b" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.156097 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d5a1e3-2e1c-4570-b36d-0793123c8a0d" containerName="registry-server" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.157447 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.162245 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.167726 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.189811 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc"] Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.212143 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751a2643-bd13-4194-b6f4-a7ea086bbce2-config-volume\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.212219 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751a2643-bd13-4194-b6f4-a7ea086bbce2-secret-volume\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.212263 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qk7\" (UniqueName: \"kubernetes.io/projected/751a2643-bd13-4194-b6f4-a7ea086bbce2-kube-api-access-55qk7\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.315797 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751a2643-bd13-4194-b6f4-a7ea086bbce2-secret-volume\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.316872 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55qk7\" (UniqueName: \"kubernetes.io/projected/751a2643-bd13-4194-b6f4-a7ea086bbce2-kube-api-access-55qk7\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.317168 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751a2643-bd13-4194-b6f4-a7ea086bbce2-config-volume\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.319322 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751a2643-bd13-4194-b6f4-a7ea086bbce2-config-volume\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.325601 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/751a2643-bd13-4194-b6f4-a7ea086bbce2-secret-volume\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.339130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qk7\" (UniqueName: \"kubernetes.io/projected/751a2643-bd13-4194-b6f4-a7ea086bbce2-kube-api-access-55qk7\") pod \"collect-profiles-29423040-qq6mc\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.483511 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:00 crc kubenswrapper[4718]: I1210 16:00:00.953220 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc"] Dec 10 16:00:01 crc kubenswrapper[4718]: I1210 16:00:01.012862 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" event={"ID":"751a2643-bd13-4194-b6f4-a7ea086bbce2","Type":"ContainerStarted","Data":"87e76005cd6ee8570db22b8f9e3739b3a1139266ebc679ad9d68357964a30907"} Dec 10 16:00:02 crc kubenswrapper[4718]: I1210 16:00:02.022522 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 16:00:02 crc kubenswrapper[4718]: E1210 16:00:02.023101 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:00:02 crc kubenswrapper[4718]: I1210 16:00:02.026572 4718 generic.go:334] "Generic (PLEG): container finished" podID="751a2643-bd13-4194-b6f4-a7ea086bbce2" containerID="bfbf6a251ab0ab80fd9418ec64cc04c48c7310c9cc8f121b5edc5dbbf7013932" exitCode=0 Dec 10 16:00:02 crc kubenswrapper[4718]: I1210 16:00:02.032800 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" event={"ID":"751a2643-bd13-4194-b6f4-a7ea086bbce2","Type":"ContainerDied","Data":"bfbf6a251ab0ab80fd9418ec64cc04c48c7310c9cc8f121b5edc5dbbf7013932"} Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.516424 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.604645 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751a2643-bd13-4194-b6f4-a7ea086bbce2-config-volume\") pod \"751a2643-bd13-4194-b6f4-a7ea086bbce2\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.604824 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751a2643-bd13-4194-b6f4-a7ea086bbce2-secret-volume\") pod \"751a2643-bd13-4194-b6f4-a7ea086bbce2\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") " Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.604889 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55qk7\" (UniqueName: \"kubernetes.io/projected/751a2643-bd13-4194-b6f4-a7ea086bbce2-kube-api-access-55qk7\") pod \"751a2643-bd13-4194-b6f4-a7ea086bbce2\" (UID: \"751a2643-bd13-4194-b6f4-a7ea086bbce2\") 
" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.605646 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751a2643-bd13-4194-b6f4-a7ea086bbce2-config-volume" (OuterVolumeSpecName: "config-volume") pod "751a2643-bd13-4194-b6f4-a7ea086bbce2" (UID: "751a2643-bd13-4194-b6f4-a7ea086bbce2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.614580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751a2643-bd13-4194-b6f4-a7ea086bbce2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "751a2643-bd13-4194-b6f4-a7ea086bbce2" (UID: "751a2643-bd13-4194-b6f4-a7ea086bbce2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.616817 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751a2643-bd13-4194-b6f4-a7ea086bbce2-kube-api-access-55qk7" (OuterVolumeSpecName: "kube-api-access-55qk7") pod "751a2643-bd13-4194-b6f4-a7ea086bbce2" (UID: "751a2643-bd13-4194-b6f4-a7ea086bbce2"). InnerVolumeSpecName "kube-api-access-55qk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.707816 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/751a2643-bd13-4194-b6f4-a7ea086bbce2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.707875 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55qk7\" (UniqueName: \"kubernetes.io/projected/751a2643-bd13-4194-b6f4-a7ea086bbce2-kube-api-access-55qk7\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:03 crc kubenswrapper[4718]: I1210 16:00:03.707889 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/751a2643-bd13-4194-b6f4-a7ea086bbce2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:00:04 crc kubenswrapper[4718]: I1210 16:00:04.068080 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" Dec 10 16:00:04 crc kubenswrapper[4718]: I1210 16:00:04.067946 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423040-qq6mc" event={"ID":"751a2643-bd13-4194-b6f4-a7ea086bbce2","Type":"ContainerDied","Data":"87e76005cd6ee8570db22b8f9e3739b3a1139266ebc679ad9d68357964a30907"} Dec 10 16:00:04 crc kubenswrapper[4718]: I1210 16:00:04.068415 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e76005cd6ee8570db22b8f9e3739b3a1139266ebc679ad9d68357964a30907" Dec 10 16:00:04 crc kubenswrapper[4718]: I1210 16:00:04.618382 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5"] Dec 10 16:00:04 crc kubenswrapper[4718]: I1210 16:00:04.627795 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29422995-2ksw5"] Dec 10 16:00:06 crc kubenswrapper[4718]: I1210 16:00:06.035229 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6" path="/var/lib/kubelet/pods/7d6a73fa-7a27-4ede-ab43-bb532b0d7dc6/volumes" Dec 10 16:00:17 crc kubenswrapper[4718]: I1210 16:00:17.020583 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 16:00:17 crc kubenswrapper[4718]: E1210 16:00:17.021484 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:00:28 crc kubenswrapper[4718]: I1210 16:00:28.020270 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 16:00:29 crc kubenswrapper[4718]: I1210 16:00:29.357113 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"d25a14785aa6626506b9deee3325065dc2c7827b1fb86d85f35cd0edf93d09dc"} Dec 10 16:00:57 crc kubenswrapper[4718]: I1210 16:00:57.023584 4718 scope.go:117] "RemoveContainer" containerID="2853606cb19265265246cb5773ac1f7585f84f129a28ccbcac4d94839ff262d5" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.423883 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29423041-7hjkl"] Dec 10 16:01:00 crc kubenswrapper[4718]: E1210 16:01:00.424976 4718 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="751a2643-bd13-4194-b6f4-a7ea086bbce2" containerName="collect-profiles" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.425004 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="751a2643-bd13-4194-b6f4-a7ea086bbce2" containerName="collect-profiles" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.425317 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="751a2643-bd13-4194-b6f4-a7ea086bbce2" containerName="collect-profiles" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.426288 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.438843 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-combined-ca-bundle\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.439190 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv77w\" (UniqueName: \"kubernetes.io/projected/c6059577-40c7-4880-8a8f-f0b5736dbac2-kube-api-access-jv77w\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.439435 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-fernet-keys\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.439565 4718 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-config-data\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.459408 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29423041-7hjkl"] Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.544996 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-combined-ca-bundle\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.545352 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv77w\" (UniqueName: \"kubernetes.io/projected/c6059577-40c7-4880-8a8f-f0b5736dbac2-kube-api-access-jv77w\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.545420 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-fernet-keys\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.545448 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-config-data\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " 
pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.561729 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-config-data\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.563300 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-fernet-keys\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.564326 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-combined-ca-bundle\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.589634 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv77w\" (UniqueName: \"kubernetes.io/projected/c6059577-40c7-4880-8a8f-f0b5736dbac2-kube-api-access-jv77w\") pod \"keystone-cron-29423041-7hjkl\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:00 crc kubenswrapper[4718]: I1210 16:01:00.790057 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:01 crc kubenswrapper[4718]: I1210 16:01:01.276362 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29423041-7hjkl"] Dec 10 16:01:01 crc kubenswrapper[4718]: I1210 16:01:01.693485 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-7hjkl" event={"ID":"c6059577-40c7-4880-8a8f-f0b5736dbac2","Type":"ContainerStarted","Data":"0688a3dc1e77aa624b65c10891d8114385cd01be158ce3fdaf155f54421cf9f8"} Dec 10 16:01:02 crc kubenswrapper[4718]: I1210 16:01:02.705947 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-7hjkl" event={"ID":"c6059577-40c7-4880-8a8f-f0b5736dbac2","Type":"ContainerStarted","Data":"7402a4e559a8010e51de8256bb6a2db37cd02e34faa0d7f6a6a7abfe46ddc034"} Dec 10 16:01:02 crc kubenswrapper[4718]: I1210 16:01:02.725684 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29423041-7hjkl" podStartSLOduration=2.72566125 podStartE2EDuration="2.72566125s" podCreationTimestamp="2025-12-10 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:01:02.723816989 +0000 UTC m=+5367.673040416" watchObservedRunningTime="2025-12-10 16:01:02.72566125 +0000 UTC m=+5367.674884667" Dec 10 16:01:06 crc kubenswrapper[4718]: I1210 16:01:06.754713 4718 generic.go:334] "Generic (PLEG): container finished" podID="c6059577-40c7-4880-8a8f-f0b5736dbac2" containerID="7402a4e559a8010e51de8256bb6a2db37cd02e34faa0d7f6a6a7abfe46ddc034" exitCode=0 Dec 10 16:01:06 crc kubenswrapper[4718]: I1210 16:01:06.754787 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-7hjkl" event={"ID":"c6059577-40c7-4880-8a8f-f0b5736dbac2","Type":"ContainerDied","Data":"7402a4e559a8010e51de8256bb6a2db37cd02e34faa0d7f6a6a7abfe46ddc034"} 
Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.206756 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.335418 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-combined-ca-bundle\") pod \"c6059577-40c7-4880-8a8f-f0b5736dbac2\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.335566 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-config-data\") pod \"c6059577-40c7-4880-8a8f-f0b5736dbac2\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.335586 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-fernet-keys\") pod \"c6059577-40c7-4880-8a8f-f0b5736dbac2\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.335631 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv77w\" (UniqueName: \"kubernetes.io/projected/c6059577-40c7-4880-8a8f-f0b5736dbac2-kube-api-access-jv77w\") pod \"c6059577-40c7-4880-8a8f-f0b5736dbac2\" (UID: \"c6059577-40c7-4880-8a8f-f0b5736dbac2\") " Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.342039 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c6059577-40c7-4880-8a8f-f0b5736dbac2" (UID: "c6059577-40c7-4880-8a8f-f0b5736dbac2"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.348334 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6059577-40c7-4880-8a8f-f0b5736dbac2-kube-api-access-jv77w" (OuterVolumeSpecName: "kube-api-access-jv77w") pod "c6059577-40c7-4880-8a8f-f0b5736dbac2" (UID: "c6059577-40c7-4880-8a8f-f0b5736dbac2"). InnerVolumeSpecName "kube-api-access-jv77w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.371377 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6059577-40c7-4880-8a8f-f0b5736dbac2" (UID: "c6059577-40c7-4880-8a8f-f0b5736dbac2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.395989 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-config-data" (OuterVolumeSpecName: "config-data") pod "c6059577-40c7-4880-8a8f-f0b5736dbac2" (UID: "c6059577-40c7-4880-8a8f-f0b5736dbac2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.437863 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.437901 4718 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.437911 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv77w\" (UniqueName: \"kubernetes.io/projected/c6059577-40c7-4880-8a8f-f0b5736dbac2-kube-api-access-jv77w\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.437922 4718 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6059577-40c7-4880-8a8f-f0b5736dbac2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.775771 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29423041-7hjkl" event={"ID":"c6059577-40c7-4880-8a8f-f0b5736dbac2","Type":"ContainerDied","Data":"0688a3dc1e77aa624b65c10891d8114385cd01be158ce3fdaf155f54421cf9f8"} Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.776043 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0688a3dc1e77aa624b65c10891d8114385cd01be158ce3fdaf155f54421cf9f8" Dec 10 16:01:08 crc kubenswrapper[4718]: I1210 16:01:08.776068 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29423041-7hjkl" Dec 10 16:01:47 crc kubenswrapper[4718]: E1210 16:01:47.785938 4718 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.17:43244->38.102.83.17:41117: read tcp 38.102.83.17:43244->38.102.83.17:41117: read: connection reset by peer Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.096776 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tl47n"] Dec 10 16:02:20 crc kubenswrapper[4718]: E1210 16:02:20.097865 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6059577-40c7-4880-8a8f-f0b5736dbac2" containerName="keystone-cron" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.097890 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6059577-40c7-4880-8a8f-f0b5736dbac2" containerName="keystone-cron" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.098088 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6059577-40c7-4880-8a8f-f0b5736dbac2" containerName="keystone-cron" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.099832 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.113817 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-catalog-content\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.113883 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-utilities\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.115078 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrcqs\" (UniqueName: \"kubernetes.io/projected/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-kube-api-access-rrcqs\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.131064 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tl47n"] Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.217514 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrcqs\" (UniqueName: \"kubernetes.io/projected/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-kube-api-access-rrcqs\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.217648 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-catalog-content\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.217694 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-utilities\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.218380 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-utilities\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.218745 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-catalog-content\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.243829 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrcqs\" (UniqueName: \"kubernetes.io/projected/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-kube-api-access-rrcqs\") pod \"redhat-operators-tl47n\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.421188 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:20 crc kubenswrapper[4718]: I1210 16:02:20.963958 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tl47n"] Dec 10 16:02:20 crc kubenswrapper[4718]: W1210 16:02:20.976909 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59e3092f_2ef4_4d10_b0c7_ddac88efe3f6.slice/crio-eb62dcd8ee9db0e611ee67db5cb22fb4ff91a5da8524f3014f596c3adcc015c2 WatchSource:0}: Error finding container eb62dcd8ee9db0e611ee67db5cb22fb4ff91a5da8524f3014f596c3adcc015c2: Status 404 returned error can't find the container with id eb62dcd8ee9db0e611ee67db5cb22fb4ff91a5da8524f3014f596c3adcc015c2 Dec 10 16:02:21 crc kubenswrapper[4718]: I1210 16:02:21.657264 4718 generic.go:334] "Generic (PLEG): container finished" podID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerID="9c35c5d17173d6c48b9e120fcf9188ea29113b468e9f4c04b592062f0c05d62d" exitCode=0 Dec 10 16:02:21 crc kubenswrapper[4718]: I1210 16:02:21.657363 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerDied","Data":"9c35c5d17173d6c48b9e120fcf9188ea29113b468e9f4c04b592062f0c05d62d"} Dec 10 16:02:21 crc kubenswrapper[4718]: I1210 16:02:21.657674 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerStarted","Data":"eb62dcd8ee9db0e611ee67db5cb22fb4ff91a5da8524f3014f596c3adcc015c2"} Dec 10 16:02:24 crc kubenswrapper[4718]: I1210 16:02:24.691506 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" 
event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerStarted","Data":"38181998d3e22882941f8d4c8bf9068d0b3ce4eeb2b12a9520776076e8f5fa7d"} Dec 10 16:02:29 crc kubenswrapper[4718]: I1210 16:02:29.029735 4718 generic.go:334] "Generic (PLEG): container finished" podID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerID="38181998d3e22882941f8d4c8bf9068d0b3ce4eeb2b12a9520776076e8f5fa7d" exitCode=0 Dec 10 16:02:29 crc kubenswrapper[4718]: I1210 16:02:29.030853 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerDied","Data":"38181998d3e22882941f8d4c8bf9068d0b3ce4eeb2b12a9520776076e8f5fa7d"} Dec 10 16:02:30 crc kubenswrapper[4718]: I1210 16:02:30.043004 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerStarted","Data":"496eb5d128fb9ae07e3a95e40539391e695e8d13abf82e96f2aaa0d2a0412f0a"} Dec 10 16:02:30 crc kubenswrapper[4718]: I1210 16:02:30.069237 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tl47n" podStartSLOduration=2.069075369 podStartE2EDuration="10.069146266s" podCreationTimestamp="2025-12-10 16:02:20 +0000 UTC" firstStartedPulling="2025-12-10 16:02:21.65956769 +0000 UTC m=+5446.608791107" lastFinishedPulling="2025-12-10 16:02:29.659638587 +0000 UTC m=+5454.608862004" observedRunningTime="2025-12-10 16:02:30.062598876 +0000 UTC m=+5455.011822303" watchObservedRunningTime="2025-12-10 16:02:30.069146266 +0000 UTC m=+5455.018369693" Dec 10 16:02:30 crc kubenswrapper[4718]: I1210 16:02:30.421580 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:30 crc kubenswrapper[4718]: I1210 16:02:30.421640 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:32 crc kubenswrapper[4718]: I1210 16:02:32.213199 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl47n" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="registry-server" probeResult="failure" output=< Dec 10 16:02:32 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 16:02:32 crc kubenswrapper[4718]: > Dec 10 16:02:40 crc kubenswrapper[4718]: I1210 16:02:40.476365 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:40 crc kubenswrapper[4718]: I1210 16:02:40.541293 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:41 crc kubenswrapper[4718]: I1210 16:02:41.196109 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tl47n"] Dec 10 16:02:42 crc kubenswrapper[4718]: I1210 16:02:42.194229 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tl47n" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="registry-server" containerID="cri-o://496eb5d128fb9ae07e3a95e40539391e695e8d13abf82e96f2aaa0d2a0412f0a" gracePeriod=2 Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.242452 4718 generic.go:334] "Generic (PLEG): container finished" podID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerID="496eb5d128fb9ae07e3a95e40539391e695e8d13abf82e96f2aaa0d2a0412f0a" exitCode=0 Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.242537 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerDied","Data":"496eb5d128fb9ae07e3a95e40539391e695e8d13abf82e96f2aaa0d2a0412f0a"} Dec 10 16:02:43 crc 
kubenswrapper[4718]: I1210 16:02:43.419576 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.536722 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-catalog-content\") pod \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.537199 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrcqs\" (UniqueName: \"kubernetes.io/projected/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-kube-api-access-rrcqs\") pod \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.537246 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-utilities\") pod \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\" (UID: \"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6\") " Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.538109 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-utilities" (OuterVolumeSpecName: "utilities") pod "59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" (UID: "59e3092f-2ef4-4d10-b0c7-ddac88efe3f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.544642 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-kube-api-access-rrcqs" (OuterVolumeSpecName: "kube-api-access-rrcqs") pod "59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" (UID: "59e3092f-2ef4-4d10-b0c7-ddac88efe3f6"). InnerVolumeSpecName "kube-api-access-rrcqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.640015 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrcqs\" (UniqueName: \"kubernetes.io/projected/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-kube-api-access-rrcqs\") on node \"crc\" DevicePath \"\"" Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.640065 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.671277 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" (UID: "59e3092f-2ef4-4d10-b0c7-ddac88efe3f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:02:43 crc kubenswrapper[4718]: I1210 16:02:43.742480 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.259914 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl47n" event={"ID":"59e3092f-2ef4-4d10-b0c7-ddac88efe3f6","Type":"ContainerDied","Data":"eb62dcd8ee9db0e611ee67db5cb22fb4ff91a5da8524f3014f596c3adcc015c2"} Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.260979 4718 scope.go:117] "RemoveContainer" containerID="496eb5d128fb9ae07e3a95e40539391e695e8d13abf82e96f2aaa0d2a0412f0a" Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.259998 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tl47n" Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.330876 4718 scope.go:117] "RemoveContainer" containerID="38181998d3e22882941f8d4c8bf9068d0b3ce4eeb2b12a9520776076e8f5fa7d" Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.414042 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tl47n"] Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.442123 4718 scope.go:117] "RemoveContainer" containerID="9c35c5d17173d6c48b9e120fcf9188ea29113b468e9f4c04b592062f0c05d62d" Dec 10 16:02:44 crc kubenswrapper[4718]: I1210 16:02:44.488319 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tl47n"] Dec 10 16:02:46 crc kubenswrapper[4718]: I1210 16:02:46.035132 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" path="/var/lib/kubelet/pods/59e3092f-2ef4-4d10-b0c7-ddac88efe3f6/volumes" Dec 10 16:02:48 crc 
kubenswrapper[4718]: I1210 16:02:48.084518 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:02:48 crc kubenswrapper[4718]: I1210 16:02:48.084892 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:03:18 crc kubenswrapper[4718]: I1210 16:03:18.084020 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:03:18 crc kubenswrapper[4718]: I1210 16:03:18.084544 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.084991 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.085715 4718 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.085772 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.086685 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d25a14785aa6626506b9deee3325065dc2c7827b1fb86d85f35cd0edf93d09dc"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.086756 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://d25a14785aa6626506b9deee3325065dc2c7827b1fb86d85f35cd0edf93d09dc" gracePeriod=600 Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.948866 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="d25a14785aa6626506b9deee3325065dc2c7827b1fb86d85f35cd0edf93d09dc" exitCode=0 Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.948930 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"d25a14785aa6626506b9deee3325065dc2c7827b1fb86d85f35cd0edf93d09dc"} Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.949477 4718 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37"} Dec 10 16:03:48 crc kubenswrapper[4718]: I1210 16:03:48.949512 4718 scope.go:117] "RemoveContainer" containerID="ebdeff3b0943919b5bdef1d22dc252289066507510289625b43f45819ed60530" Dec 10 16:05:48 crc kubenswrapper[4718]: I1210 16:05:48.084019 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:05:48 crc kubenswrapper[4718]: I1210 16:05:48.084600 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:06:18 crc kubenswrapper[4718]: I1210 16:06:18.084695 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:06:18 crc kubenswrapper[4718]: I1210 16:06:18.085309 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.084448 4718 
patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.085103 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.085159 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.086129 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.086208 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" gracePeriod=600 Dec 10 16:06:48 crc kubenswrapper[4718]: E1210 16:06:48.210930 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.604205 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" exitCode=0 Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.604274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37"} Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.604338 4718 scope.go:117] "RemoveContainer" containerID="d25a14785aa6626506b9deee3325065dc2c7827b1fb86d85f35cd0edf93d09dc" Dec 10 16:06:48 crc kubenswrapper[4718]: I1210 16:06:48.605321 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:06:48 crc kubenswrapper[4718]: E1210 16:06:48.605695 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:07:02 crc kubenswrapper[4718]: I1210 16:07:02.021286 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:07:02 crc kubenswrapper[4718]: E1210 16:07:02.022250 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:07:16 crc kubenswrapper[4718]: I1210 16:07:16.021544 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:07:16 crc kubenswrapper[4718]: E1210 16:07:16.022491 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:07:30 crc kubenswrapper[4718]: I1210 16:07:30.020673 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:07:30 crc kubenswrapper[4718]: E1210 16:07:30.021554 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:07:45 crc kubenswrapper[4718]: I1210 16:07:45.020691 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:07:45 crc kubenswrapper[4718]: E1210 16:07:45.021693 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:08:00 crc kubenswrapper[4718]: I1210 16:08:00.020258 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:08:00 crc kubenswrapper[4718]: E1210 16:08:00.021174 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:08:13 crc kubenswrapper[4718]: I1210 16:08:13.021276 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:08:13 crc kubenswrapper[4718]: E1210 16:08:13.022229 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:08:25 crc kubenswrapper[4718]: I1210 16:08:25.021166 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:08:25 crc kubenswrapper[4718]: E1210 16:08:25.023521 4718 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:08:38 crc kubenswrapper[4718]: I1210 16:08:38.025060 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:08:38 crc kubenswrapper[4718]: E1210 16:08:38.025976 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:08:49 crc kubenswrapper[4718]: I1210 16:08:49.020633 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:08:49 crc kubenswrapper[4718]: E1210 16:08:49.021440 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:09:02 crc kubenswrapper[4718]: I1210 16:09:02.021290 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:09:02 crc kubenswrapper[4718]: E1210 16:09:02.022132 4718 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:09:15 crc kubenswrapper[4718]: I1210 16:09:15.021826 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:09:15 crc kubenswrapper[4718]: E1210 16:09:15.023857 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:09:30 crc kubenswrapper[4718]: I1210 16:09:30.020909 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:09:30 crc kubenswrapper[4718]: E1210 16:09:30.021927 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:09:45 crc kubenswrapper[4718]: I1210 16:09:45.020725 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:09:45 crc kubenswrapper[4718]: E1210 16:09:45.021496 4718 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.533703 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x8gvj"] Dec 10 16:09:46 crc kubenswrapper[4718]: E1210 16:09:46.534813 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="extract-utilities" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.534846 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="extract-utilities" Dec 10 16:09:46 crc kubenswrapper[4718]: E1210 16:09:46.534882 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="extract-content" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.534891 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="extract-content" Dec 10 16:09:46 crc kubenswrapper[4718]: E1210 16:09:46.534914 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="registry-server" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.534922 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="registry-server" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.535283 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="59e3092f-2ef4-4d10-b0c7-ddac88efe3f6" containerName="registry-server" Dec 10 
16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.538014 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.558153 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8gvj"] Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.584091 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-catalog-content\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.584145 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-utilities\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.584708 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/79e74e21-b6ff-41c9-9b93-553f934b70ba-kube-api-access-kpq5s\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.691108 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/79e74e21-b6ff-41c9-9b93-553f934b70ba-kube-api-access-kpq5s\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " 
pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.691208 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-catalog-content\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.691247 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-utilities\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.692208 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-utilities\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.692433 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-catalog-content\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.714662 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/79e74e21-b6ff-41c9-9b93-553f934b70ba-kube-api-access-kpq5s\") pod \"certified-operators-x8gvj\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " 
pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:46 crc kubenswrapper[4718]: I1210 16:09:46.865101 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:47 crc kubenswrapper[4718]: I1210 16:09:47.406244 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8gvj"] Dec 10 16:09:48 crc kubenswrapper[4718]: I1210 16:09:48.179807 4718 generic.go:334] "Generic (PLEG): container finished" podID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerID="48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64" exitCode=0 Dec 10 16:09:48 crc kubenswrapper[4718]: I1210 16:09:48.179914 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerDied","Data":"48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64"} Dec 10 16:09:48 crc kubenswrapper[4718]: I1210 16:09:48.180211 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerStarted","Data":"4a02e5dc87d4ed1f7ec973496eaea7ab666345cf22c493579962c16d75f8f47a"} Dec 10 16:09:48 crc kubenswrapper[4718]: I1210 16:09:48.182737 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:09:50 crc kubenswrapper[4718]: I1210 16:09:50.202087 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerStarted","Data":"7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85"} Dec 10 16:09:51 crc kubenswrapper[4718]: I1210 16:09:51.226812 4718 generic.go:334] "Generic (PLEG): container finished" podID="79e74e21-b6ff-41c9-9b93-553f934b70ba" 
containerID="7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85" exitCode=0 Dec 10 16:09:51 crc kubenswrapper[4718]: I1210 16:09:51.226931 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerDied","Data":"7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85"} Dec 10 16:09:52 crc kubenswrapper[4718]: I1210 16:09:52.243614 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerStarted","Data":"283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4"} Dec 10 16:09:52 crc kubenswrapper[4718]: I1210 16:09:52.271552 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8gvj" podStartSLOduration=2.680579814 podStartE2EDuration="6.271497859s" podCreationTimestamp="2025-12-10 16:09:46 +0000 UTC" firstStartedPulling="2025-12-10 16:09:48.182342686 +0000 UTC m=+5893.131566103" lastFinishedPulling="2025-12-10 16:09:51.773260721 +0000 UTC m=+5896.722484148" observedRunningTime="2025-12-10 16:09:52.265331974 +0000 UTC m=+5897.214555391" watchObservedRunningTime="2025-12-10 16:09:52.271497859 +0000 UTC m=+5897.220721276" Dec 10 16:09:56 crc kubenswrapper[4718]: I1210 16:09:56.028200 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:09:56 crc kubenswrapper[4718]: E1210 16:09:56.028892 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:09:56 crc kubenswrapper[4718]: I1210 16:09:56.865480 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:56 crc kubenswrapper[4718]: I1210 16:09:56.865535 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:56 crc kubenswrapper[4718]: I1210 16:09:56.918746 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:57 crc kubenswrapper[4718]: I1210 16:09:57.366328 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:09:57 crc kubenswrapper[4718]: I1210 16:09:57.423615 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8gvj"] Dec 10 16:09:59 crc kubenswrapper[4718]: I1210 16:09:59.349775 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8gvj" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="registry-server" containerID="cri-o://283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4" gracePeriod=2 Dec 10 16:09:59 crc kubenswrapper[4718]: I1210 16:09:59.965181 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.098098 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-utilities\") pod \"79e74e21-b6ff-41c9-9b93-553f934b70ba\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.098273 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-catalog-content\") pod \"79e74e21-b6ff-41c9-9b93-553f934b70ba\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.098304 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/79e74e21-b6ff-41c9-9b93-553f934b70ba-kube-api-access-kpq5s\") pod \"79e74e21-b6ff-41c9-9b93-553f934b70ba\" (UID: \"79e74e21-b6ff-41c9-9b93-553f934b70ba\") " Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.100129 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-utilities" (OuterVolumeSpecName: "utilities") pod "79e74e21-b6ff-41c9-9b93-553f934b70ba" (UID: "79e74e21-b6ff-41c9-9b93-553f934b70ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.111860 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e74e21-b6ff-41c9-9b93-553f934b70ba-kube-api-access-kpq5s" (OuterVolumeSpecName: "kube-api-access-kpq5s") pod "79e74e21-b6ff-41c9-9b93-553f934b70ba" (UID: "79e74e21-b6ff-41c9-9b93-553f934b70ba"). InnerVolumeSpecName "kube-api-access-kpq5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.168083 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e74e21-b6ff-41c9-9b93-553f934b70ba" (UID: "79e74e21-b6ff-41c9-9b93-553f934b70ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.200755 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.200792 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e74e21-b6ff-41c9-9b93-553f934b70ba-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.200806 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpq5s\" (UniqueName: \"kubernetes.io/projected/79e74e21-b6ff-41c9-9b93-553f934b70ba-kube-api-access-kpq5s\") on node \"crc\" DevicePath \"\"" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.362257 4718 generic.go:334] "Generic (PLEG): container finished" podID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerID="283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4" exitCode=0 Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.362376 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8gvj" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.362352 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerDied","Data":"283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4"} Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.362488 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8gvj" event={"ID":"79e74e21-b6ff-41c9-9b93-553f934b70ba","Type":"ContainerDied","Data":"4a02e5dc87d4ed1f7ec973496eaea7ab666345cf22c493579962c16d75f8f47a"} Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.362523 4718 scope.go:117] "RemoveContainer" containerID="283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.388273 4718 scope.go:117] "RemoveContainer" containerID="7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.424075 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8gvj"] Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.428781 4718 scope.go:117] "RemoveContainer" containerID="48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.441678 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8gvj"] Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.481766 4718 scope.go:117] "RemoveContainer" containerID="283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4" Dec 10 16:10:00 crc kubenswrapper[4718]: E1210 16:10:00.482349 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4\": container with ID starting with 283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4 not found: ID does not exist" containerID="283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.482436 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4"} err="failed to get container status \"283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4\": rpc error: code = NotFound desc = could not find container \"283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4\": container with ID starting with 283da19f6ef4dcae37559380f71df64c60bb988843586d7c56b0029b1dc7dbc4 not found: ID does not exist" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.482471 4718 scope.go:117] "RemoveContainer" containerID="7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85" Dec 10 16:10:00 crc kubenswrapper[4718]: E1210 16:10:00.482957 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85\": container with ID starting with 7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85 not found: ID does not exist" containerID="7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.482980 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85"} err="failed to get container status \"7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85\": rpc error: code = NotFound desc = could not find container \"7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85\": container with ID 
starting with 7cff43669ca5c029247a2a8fd03dd1f3d3b90eef69752486588b0f256dbb6a85 not found: ID does not exist" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.482997 4718 scope.go:117] "RemoveContainer" containerID="48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64" Dec 10 16:10:00 crc kubenswrapper[4718]: E1210 16:10:00.483290 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64\": container with ID starting with 48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64 not found: ID does not exist" containerID="48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64" Dec 10 16:10:00 crc kubenswrapper[4718]: I1210 16:10:00.483310 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64"} err="failed to get container status \"48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64\": rpc error: code = NotFound desc = could not find container \"48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64\": container with ID starting with 48c93ddc29126c1b7649493e361daed8fe97f8f1f3216672c8d37e338690dd64 not found: ID does not exist" Dec 10 16:10:02 crc kubenswrapper[4718]: I1210 16:10:02.031292 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" path="/var/lib/kubelet/pods/79e74e21-b6ff-41c9-9b93-553f934b70ba/volumes" Dec 10 16:10:07 crc kubenswrapper[4718]: I1210 16:10:07.021204 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:10:07 crc kubenswrapper[4718]: E1210 16:10:07.022105 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:10:20 crc kubenswrapper[4718]: I1210 16:10:20.021229 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:10:20 crc kubenswrapper[4718]: E1210 16:10:20.022203 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:10:32 crc kubenswrapper[4718]: I1210 16:10:32.020490 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:10:32 crc kubenswrapper[4718]: E1210 16:10:32.021280 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:10:46 crc kubenswrapper[4718]: I1210 16:10:46.028191 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:10:46 crc kubenswrapper[4718]: E1210 16:10:46.029077 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.815360 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s2spd"] Dec 10 16:10:47 crc kubenswrapper[4718]: E1210 16:10:47.816164 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="registry-server" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.816178 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="registry-server" Dec 10 16:10:47 crc kubenswrapper[4718]: E1210 16:10:47.816197 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="extract-utilities" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.816204 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="extract-utilities" Dec 10 16:10:47 crc kubenswrapper[4718]: E1210 16:10:47.816242 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="extract-content" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.816248 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="extract-content" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.816479 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e74e21-b6ff-41c9-9b93-553f934b70ba" containerName="registry-server" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.818037 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.833967 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2spd"] Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.983856 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4497\" (UniqueName: \"kubernetes.io/projected/db32c7ad-c4a6-47da-a624-03fd7ef1fede-kube-api-access-s4497\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.984694 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-catalog-content\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:47 crc kubenswrapper[4718]: I1210 16:10:47.984855 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-utilities\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.087452 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4497\" (UniqueName: \"kubernetes.io/projected/db32c7ad-c4a6-47da-a624-03fd7ef1fede-kube-api-access-s4497\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.087554 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-catalog-content\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.087621 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-utilities\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.088226 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-catalog-content\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.088242 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-utilities\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.109334 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4497\" (UniqueName: \"kubernetes.io/projected/db32c7ad-c4a6-47da-a624-03fd7ef1fede-kube-api-access-s4497\") pod \"redhat-marketplace-s2spd\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.150495 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.666662 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2spd"] Dec 10 16:10:48 crc kubenswrapper[4718]: I1210 16:10:48.893031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2spd" event={"ID":"db32c7ad-c4a6-47da-a624-03fd7ef1fede","Type":"ContainerStarted","Data":"74f8a06da4523f0cd94c46bb7cd58524ccdc53c009da69552fb392f30b93eb00"} Dec 10 16:10:49 crc kubenswrapper[4718]: I1210 16:10:49.937200 4718 generic.go:334] "Generic (PLEG): container finished" podID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerID="16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c" exitCode=0 Dec 10 16:10:49 crc kubenswrapper[4718]: I1210 16:10:49.937267 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2spd" event={"ID":"db32c7ad-c4a6-47da-a624-03fd7ef1fede","Type":"ContainerDied","Data":"16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c"} Dec 10 16:10:51 crc kubenswrapper[4718]: I1210 16:10:51.965489 4718 generic.go:334] "Generic (PLEG): container finished" podID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerID="b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070" exitCode=0 Dec 10 16:10:51 crc kubenswrapper[4718]: I1210 16:10:51.965607 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2spd" event={"ID":"db32c7ad-c4a6-47da-a624-03fd7ef1fede","Type":"ContainerDied","Data":"b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070"} Dec 10 16:10:53 crc kubenswrapper[4718]: I1210 16:10:53.994655 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2spd" 
event={"ID":"db32c7ad-c4a6-47da-a624-03fd7ef1fede","Type":"ContainerStarted","Data":"247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03"} Dec 10 16:10:54 crc kubenswrapper[4718]: I1210 16:10:54.026549 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s2spd" podStartSLOduration=4.227784199 podStartE2EDuration="7.026523681s" podCreationTimestamp="2025-12-10 16:10:47 +0000 UTC" firstStartedPulling="2025-12-10 16:10:49.941744176 +0000 UTC m=+5954.890967603" lastFinishedPulling="2025-12-10 16:10:52.740483648 +0000 UTC m=+5957.689707085" observedRunningTime="2025-12-10 16:10:54.016141008 +0000 UTC m=+5958.965364425" watchObservedRunningTime="2025-12-10 16:10:54.026523681 +0000 UTC m=+5958.975747098" Dec 10 16:10:58 crc kubenswrapper[4718]: I1210 16:10:58.151234 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:58 crc kubenswrapper[4718]: I1210 16:10:58.151714 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:58 crc kubenswrapper[4718]: I1210 16:10:58.207136 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:59 crc kubenswrapper[4718]: I1210 16:10:59.021049 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:10:59 crc kubenswrapper[4718]: E1210 16:10:59.021711 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" 
podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:10:59 crc kubenswrapper[4718]: I1210 16:10:59.110807 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:10:59 crc kubenswrapper[4718]: I1210 16:10:59.159576 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2spd"] Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.072039 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s2spd" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="registry-server" containerID="cri-o://247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03" gracePeriod=2 Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.648418 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.909259 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-utilities\") pod \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.909374 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-catalog-content\") pod \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\" (UID: \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.909518 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4497\" (UniqueName: \"kubernetes.io/projected/db32c7ad-c4a6-47da-a624-03fd7ef1fede-kube-api-access-s4497\") pod \"db32c7ad-c4a6-47da-a624-03fd7ef1fede\" (UID: 
\"db32c7ad-c4a6-47da-a624-03fd7ef1fede\") " Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.910531 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-utilities" (OuterVolumeSpecName: "utilities") pod "db32c7ad-c4a6-47da-a624-03fd7ef1fede" (UID: "db32c7ad-c4a6-47da-a624-03fd7ef1fede"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.916418 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db32c7ad-c4a6-47da-a624-03fd7ef1fede-kube-api-access-s4497" (OuterVolumeSpecName: "kube-api-access-s4497") pod "db32c7ad-c4a6-47da-a624-03fd7ef1fede" (UID: "db32c7ad-c4a6-47da-a624-03fd7ef1fede"). InnerVolumeSpecName "kube-api-access-s4497". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:11:01 crc kubenswrapper[4718]: I1210 16:11:01.935256 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db32c7ad-c4a6-47da-a624-03fd7ef1fede" (UID: "db32c7ad-c4a6-47da-a624-03fd7ef1fede"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.013130 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4497\" (UniqueName: \"kubernetes.io/projected/db32c7ad-c4a6-47da-a624-03fd7ef1fede-kube-api-access-s4497\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.013183 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.013201 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db32c7ad-c4a6-47da-a624-03fd7ef1fede-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.085604 4718 generic.go:334] "Generic (PLEG): container finished" podID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerID="247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03" exitCode=0 Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.085659 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2spd" event={"ID":"db32c7ad-c4a6-47da-a624-03fd7ef1fede","Type":"ContainerDied","Data":"247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03"} Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.085691 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2spd" event={"ID":"db32c7ad-c4a6-47da-a624-03fd7ef1fede","Type":"ContainerDied","Data":"74f8a06da4523f0cd94c46bb7cd58524ccdc53c009da69552fb392f30b93eb00"} Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.085711 4718 scope.go:117] "RemoveContainer" containerID="247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 
16:11:02.085710 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2spd" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.111199 4718 scope.go:117] "RemoveContainer" containerID="b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.118609 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2spd"] Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.130539 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2spd"] Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.135785 4718 scope.go:117] "RemoveContainer" containerID="16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.202187 4718 scope.go:117] "RemoveContainer" containerID="247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03" Dec 10 16:11:02 crc kubenswrapper[4718]: E1210 16:11:02.202748 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03\": container with ID starting with 247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03 not found: ID does not exist" containerID="247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.202801 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03"} err="failed to get container status \"247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03\": rpc error: code = NotFound desc = could not find container \"247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03\": container with ID starting with 
247196b40438372efc8b2ba92e4c86096d7565a1aafc71971e614f279b9ded03 not found: ID does not exist" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.202822 4718 scope.go:117] "RemoveContainer" containerID="b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070" Dec 10 16:11:02 crc kubenswrapper[4718]: E1210 16:11:02.203112 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070\": container with ID starting with b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070 not found: ID does not exist" containerID="b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.203129 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070"} err="failed to get container status \"b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070\": rpc error: code = NotFound desc = could not find container \"b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070\": container with ID starting with b0d6aafd7172a1d29027b7b6f832b77fc0078e04b645b6c06956809561476070 not found: ID does not exist" Dec 10 16:11:02 crc kubenswrapper[4718]: I1210 16:11:02.203142 4718 scope.go:117] "RemoveContainer" containerID="16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c" Dec 10 16:11:02 crc kubenswrapper[4718]: E1210 16:11:02.203421 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c\": container with ID starting with 16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c not found: ID does not exist" containerID="16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c" Dec 10 16:11:02 crc 
kubenswrapper[4718]: I1210 16:11:02.203439 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c"} err="failed to get container status \"16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c\": rpc error: code = NotFound desc = could not find container \"16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c\": container with ID starting with 16a259a6370f27c7e326f8c119e99962fce0088420e3c903ffd750a0527fdc9c not found: ID does not exist" Dec 10 16:11:04 crc kubenswrapper[4718]: I1210 16:11:04.031747 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" path="/var/lib/kubelet/pods/db32c7ad-c4a6-47da-a624-03fd7ef1fede/volumes" Dec 10 16:11:13 crc kubenswrapper[4718]: I1210 16:11:13.021543 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:11:13 crc kubenswrapper[4718]: E1210 16:11:13.023038 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:11:26 crc kubenswrapper[4718]: I1210 16:11:26.027497 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:11:26 crc kubenswrapper[4718]: E1210 16:11:26.028226 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:11:39 crc kubenswrapper[4718]: I1210 16:11:39.021316 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:11:39 crc kubenswrapper[4718]: E1210 16:11:39.022356 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:11:52 crc kubenswrapper[4718]: I1210 16:11:52.590985 4718 generic.go:334] "Generic (PLEG): container finished" podID="df134785-8fb2-418f-89ba-55f6d822f50a" containerID="1f4ef711b0b00e11624af540c169e9f859704946e61429a4a90e8e686afe7ace" exitCode=0 Dec 10 16:11:52 crc kubenswrapper[4718]: I1210 16:11:52.591093 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"df134785-8fb2-418f-89ba-55f6d822f50a","Type":"ContainerDied","Data":"1f4ef711b0b00e11624af540c169e9f859704946e61429a4a90e8e686afe7ace"} Dec 10 16:11:53 crc kubenswrapper[4718]: I1210 16:11:53.994312 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.022674 4718 scope.go:117] "RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137164 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137328 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config-secret\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137403 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck94q\" (UniqueName: \"kubernetes.io/projected/df134785-8fb2-418f-89ba-55f6d822f50a-kube-api-access-ck94q\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137468 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ca-certs\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137507 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-config-data\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: 
\"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137573 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-temporary\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137634 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ssh-key\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.137698 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-workdir\") pod \"df134785-8fb2-418f-89ba-55f6d822f50a\" (UID: \"df134785-8fb2-418f-89ba-55f6d822f50a\") " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.147312 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.149120 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.149366 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-config-data" (OuterVolumeSpecName: "config-data") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.168441 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df134785-8fb2-418f-89ba-55f6d822f50a-kube-api-access-ck94q" (OuterVolumeSpecName: "kube-api-access-ck94q") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "kube-api-access-ck94q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.171498 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.187491 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.188669 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.197901 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.219086 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "df134785-8fb2-418f-89ba-55f6d822f50a" (UID: "df134785-8fb2-418f-89ba-55f6d822f50a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239797 4718 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239890 4718 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239906 4718 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239915 4718 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/df134785-8fb2-418f-89ba-55f6d822f50a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239927 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239939 4718 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239948 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck94q\" (UniqueName: \"kubernetes.io/projected/df134785-8fb2-418f-89ba-55f6d822f50a-kube-api-access-ck94q\") on node \"crc\" DevicePath 
\"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239958 4718 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/df134785-8fb2-418f-89ba-55f6d822f50a-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.239965 4718 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/df134785-8fb2-418f-89ba-55f6d822f50a-config-data\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.260420 4718 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.342663 4718 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.613517 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"df134785-8fb2-418f-89ba-55f6d822f50a","Type":"ContainerDied","Data":"14cb497c47bcd778008bf70848bc1510428dc849baf8fa2d13a6bad1eaf1e2a0"} Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.613548 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.613572 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14cb497c47bcd778008bf70848bc1510428dc849baf8fa2d13a6bad1eaf1e2a0" Dec 10 16:11:54 crc kubenswrapper[4718]: I1210 16:11:54.630730 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"0c11c531f13dd9c964e72f22f6c356f673d535df358bec5460370fca430952f1"} Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.371189 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 10 16:12:05 crc kubenswrapper[4718]: E1210 16:12:05.372192 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df134785-8fb2-418f-89ba-55f6d822f50a" containerName="tempest-tests-tempest-tests-runner" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.372207 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="df134785-8fb2-418f-89ba-55f6d822f50a" containerName="tempest-tests-tempest-tests-runner" Dec 10 16:12:05 crc kubenswrapper[4718]: E1210 16:12:05.372221 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="registry-server" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.372227 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="registry-server" Dec 10 16:12:05 crc kubenswrapper[4718]: E1210 16:12:05.372252 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="extract-utilities" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.372259 4718 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="extract-utilities" Dec 10 16:12:05 crc kubenswrapper[4718]: E1210 16:12:05.372286 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="extract-content" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.372294 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="extract-content" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.372733 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="db32c7ad-c4a6-47da-a624-03fd7ef1fede" containerName="registry-server" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.372763 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="df134785-8fb2-418f-89ba-55f6d822f50a" containerName="tempest-tests-tempest-tests-runner" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.373491 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.378826 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vwgkm" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.386591 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.564172 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrqwm\" (UniqueName: \"kubernetes.io/projected/c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e-kube-api-access-rrqwm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.564282 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.666823 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrqwm\" (UniqueName: \"kubernetes.io/projected/c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e-kube-api-access-rrqwm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.666888 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.667588 4718 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.691924 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrqwm\" (UniqueName: \"kubernetes.io/projected/c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e-kube-api-access-rrqwm\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.707497 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:05 crc kubenswrapper[4718]: I1210 16:12:05.993334 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 10 16:12:06 crc kubenswrapper[4718]: I1210 16:12:06.570025 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 10 16:12:06 crc kubenswrapper[4718]: I1210 16:12:06.773200 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e","Type":"ContainerStarted","Data":"38a9e1c74716318b9ba2ea134260a63422e8b659a26eb2693d6552047303e1fa"} Dec 10 16:12:08 crc kubenswrapper[4718]: I1210 16:12:08.794763 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e","Type":"ContainerStarted","Data":"ed0fcafd3d0b5bb6c8896c9ee4cddc6806d39e6a6a536039124cb32a6bcf709e"} Dec 10 16:12:08 crc kubenswrapper[4718]: I1210 16:12:08.815014 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.463573565 podStartE2EDuration="3.814992873s" podCreationTimestamp="2025-12-10 16:12:05 +0000 UTC" firstStartedPulling="2025-12-10 16:12:06.577800922 +0000 UTC m=+6031.527024339" lastFinishedPulling="2025-12-10 16:12:07.92922023 +0000 UTC m=+6032.878443647" observedRunningTime="2025-12-10 16:12:08.813772413 +0000 UTC m=+6033.762995850" watchObservedRunningTime="2025-12-10 16:12:08.814992873 +0000 UTC m=+6033.764216300" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.581561 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xvrn2/must-gather-7sj76"] Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.584581 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.586501 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xvrn2"/"kube-root-ca.crt" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.586746 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xvrn2"/"openshift-service-ca.crt" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.620370 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xvrn2/must-gather-7sj76"] Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.716002 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tgj\" (UniqueName: \"kubernetes.io/projected/f8400f40-f090-420a-9c71-5bef1a2bce1f-kube-api-access-j9tgj\") pod \"must-gather-7sj76\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") " pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.716320 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8400f40-f090-420a-9c71-5bef1a2bce1f-must-gather-output\") pod \"must-gather-7sj76\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") " pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.818289 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tgj\" (UniqueName: \"kubernetes.io/projected/f8400f40-f090-420a-9c71-5bef1a2bce1f-kube-api-access-j9tgj\") pod \"must-gather-7sj76\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") " pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.818725 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8400f40-f090-420a-9c71-5bef1a2bce1f-must-gather-output\") pod \"must-gather-7sj76\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") " pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.819227 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8400f40-f090-420a-9c71-5bef1a2bce1f-must-gather-output\") pod \"must-gather-7sj76\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") " pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.836522 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tgj\" (UniqueName: \"kubernetes.io/projected/f8400f40-f090-420a-9c71-5bef1a2bce1f-kube-api-access-j9tgj\") pod \"must-gather-7sj76\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") " pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:31 crc kubenswrapper[4718]: I1210 16:12:31.917444 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/must-gather-7sj76" Dec 10 16:12:32 crc kubenswrapper[4718]: I1210 16:12:32.712939 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xvrn2/must-gather-7sj76"] Dec 10 16:12:32 crc kubenswrapper[4718]: W1210 16:12:32.716356 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8400f40_f090_420a_9c71_5bef1a2bce1f.slice/crio-e3ea4992b14a7fe3b85cc9e9e1eddf10f9862b13524f3790f4b68132cd8221af WatchSource:0}: Error finding container e3ea4992b14a7fe3b85cc9e9e1eddf10f9862b13524f3790f4b68132cd8221af: Status 404 returned error can't find the container with id e3ea4992b14a7fe3b85cc9e9e1eddf10f9862b13524f3790f4b68132cd8221af Dec 10 16:12:33 crc kubenswrapper[4718]: I1210 16:12:33.222291 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/must-gather-7sj76" event={"ID":"f8400f40-f090-420a-9c71-5bef1a2bce1f","Type":"ContainerStarted","Data":"e3ea4992b14a7fe3b85cc9e9e1eddf10f9862b13524f3790f4b68132cd8221af"} Dec 10 16:12:42 crc kubenswrapper[4718]: I1210 16:12:42.327734 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/must-gather-7sj76" event={"ID":"f8400f40-f090-420a-9c71-5bef1a2bce1f","Type":"ContainerStarted","Data":"25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a"} Dec 10 16:12:42 crc kubenswrapper[4718]: I1210 16:12:42.328287 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/must-gather-7sj76" event={"ID":"f8400f40-f090-420a-9c71-5bef1a2bce1f","Type":"ContainerStarted","Data":"d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b"} Dec 10 16:12:42 crc kubenswrapper[4718]: I1210 16:12:42.345651 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xvrn2/must-gather-7sj76" podStartSLOduration=3.060144841 
podStartE2EDuration="11.345628608s" podCreationTimestamp="2025-12-10 16:12:31 +0000 UTC" firstStartedPulling="2025-12-10 16:12:32.719482575 +0000 UTC m=+6057.668705992" lastFinishedPulling="2025-12-10 16:12:41.004966342 +0000 UTC m=+6065.954189759" observedRunningTime="2025-12-10 16:12:42.34366569 +0000 UTC m=+6067.292889107" watchObservedRunningTime="2025-12-10 16:12:42.345628608 +0000 UTC m=+6067.294852025" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.306247 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xbhdp"] Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.308750 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.325724 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbhdp"] Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.382349 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-catalog-content\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.382536 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-utilities\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.382657 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmc9\" (UniqueName: 
\"kubernetes.io/projected/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-kube-api-access-nkmc9\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.484290 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-utilities\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.484804 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmc9\" (UniqueName: \"kubernetes.io/projected/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-kube-api-access-nkmc9\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.484878 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-catalog-content\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.485029 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-utilities\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.485406 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-catalog-content\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.516336 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmc9\" (UniqueName: \"kubernetes.io/projected/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-kube-api-access-nkmc9\") pod \"redhat-operators-xbhdp\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:44 crc kubenswrapper[4718]: I1210 16:12:44.643668 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:12:45 crc kubenswrapper[4718]: I1210 16:12:45.245703 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbhdp"] Dec 10 16:12:45 crc kubenswrapper[4718]: W1210 16:12:45.285769 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbfda8ef_d2e3_4d54_8551_5f6cee9d221d.slice/crio-d1f1ab111e3cfcc9edb0e87ec1c42eec647b821a68b79447830c3b640a7e173a WatchSource:0}: Error finding container d1f1ab111e3cfcc9edb0e87ec1c42eec647b821a68b79447830c3b640a7e173a: Status 404 returned error can't find the container with id d1f1ab111e3cfcc9edb0e87ec1c42eec647b821a68b79447830c3b640a7e173a Dec 10 16:12:45 crc kubenswrapper[4718]: I1210 16:12:45.380968 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerStarted","Data":"d1f1ab111e3cfcc9edb0e87ec1c42eec647b821a68b79447830c3b640a7e173a"} Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.523681 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-qs2bg"] Dec 
10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.526381 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.530053 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xvrn2"/"default-dockercfg-qqrtw" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.652767 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-host\") pod \"crc-debug-qs2bg\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.652874 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8g7\" (UniqueName: \"kubernetes.io/projected/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-kube-api-access-6f8g7\") pod \"crc-debug-qs2bg\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.754918 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-host\") pod \"crc-debug-qs2bg\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.755024 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8g7\" (UniqueName: \"kubernetes.io/projected/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-kube-api-access-6f8g7\") pod \"crc-debug-qs2bg\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.755560 4718 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-host\") pod \"crc-debug-qs2bg\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.776188 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8g7\" (UniqueName: \"kubernetes.io/projected/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-kube-api-access-6f8g7\") pod \"crc-debug-qs2bg\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:46 crc kubenswrapper[4718]: I1210 16:12:46.856188 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:12:47 crc kubenswrapper[4718]: I1210 16:12:47.415030 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" event={"ID":"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e","Type":"ContainerStarted","Data":"ecbb419299bf45b877920deeaa83621ecc2755db6ef82c20eefc3455d2e12a0e"} Dec 10 16:12:47 crc kubenswrapper[4718]: I1210 16:12:47.426348 4718 generic.go:334] "Generic (PLEG): container finished" podID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerID="6f5e1c30e1e7191dbd00699b199d8f0a70ab2540fcc40705047d6f36bcbc4cc1" exitCode=0 Dec 10 16:12:47 crc kubenswrapper[4718]: I1210 16:12:47.426579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerDied","Data":"6f5e1c30e1e7191dbd00699b199d8f0a70ab2540fcc40705047d6f36bcbc4cc1"} Dec 10 16:12:49 crc kubenswrapper[4718]: I1210 16:12:49.464528 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" 
event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerStarted","Data":"bfcd64a338a9e635ae31433893ae9a6e5649ab41645b36eab9be8ff1fca16b32"} Dec 10 16:12:52 crc kubenswrapper[4718]: I1210 16:12:52.505296 4718 generic.go:334] "Generic (PLEG): container finished" podID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerID="bfcd64a338a9e635ae31433893ae9a6e5649ab41645b36eab9be8ff1fca16b32" exitCode=0 Dec 10 16:12:52 crc kubenswrapper[4718]: I1210 16:12:52.505415 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerDied","Data":"bfcd64a338a9e635ae31433893ae9a6e5649ab41645b36eab9be8ff1fca16b32"} Dec 10 16:13:00 crc kubenswrapper[4718]: I1210 16:13:00.591456 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" event={"ID":"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e","Type":"ContainerStarted","Data":"1004157a1b96c2ac088177793e4a256b15af59cbcd43c6182b738d3452c00f25"} Dec 10 16:13:00 crc kubenswrapper[4718]: I1210 16:13:00.597597 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerStarted","Data":"a59baaff7daa9dc8e65da4f1b229f90f461aefddafe1b6e4fff12148ddba6959"} Dec 10 16:13:00 crc kubenswrapper[4718]: I1210 16:13:00.615058 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" podStartSLOduration=1.729215379 podStartE2EDuration="14.615000487s" podCreationTimestamp="2025-12-10 16:12:46 +0000 UTC" firstStartedPulling="2025-12-10 16:12:46.901336783 +0000 UTC m=+6071.850560200" lastFinishedPulling="2025-12-10 16:12:59.787121891 +0000 UTC m=+6084.736345308" observedRunningTime="2025-12-10 16:13:00.610461426 +0000 UTC m=+6085.559684863" watchObservedRunningTime="2025-12-10 16:13:00.615000487 +0000 UTC m=+6085.564223904" 
Dec 10 16:13:00 crc kubenswrapper[4718]: I1210 16:13:00.651692 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xbhdp" podStartSLOduration=4.329236859 podStartE2EDuration="16.651664259s" podCreationTimestamp="2025-12-10 16:12:44 +0000 UTC" firstStartedPulling="2025-12-10 16:12:47.429443282 +0000 UTC m=+6072.378666699" lastFinishedPulling="2025-12-10 16:12:59.751870672 +0000 UTC m=+6084.701094099" observedRunningTime="2025-12-10 16:13:00.634894211 +0000 UTC m=+6085.584117638" watchObservedRunningTime="2025-12-10 16:13:00.651664259 +0000 UTC m=+6085.600887686" Dec 10 16:13:04 crc kubenswrapper[4718]: I1210 16:13:04.644482 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:13:04 crc kubenswrapper[4718]: I1210 16:13:04.645184 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:13:05 crc kubenswrapper[4718]: I1210 16:13:05.712093 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xbhdp" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="registry-server" probeResult="failure" output=< Dec 10 16:13:05 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s Dec 10 16:13:05 crc kubenswrapper[4718]: > Dec 10 16:13:14 crc kubenswrapper[4718]: I1210 16:13:14.706139 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:13:14 crc kubenswrapper[4718]: I1210 16:13:14.765864 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:13:15 crc kubenswrapper[4718]: I1210 16:13:15.512093 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xbhdp"] Dec 10 16:13:15 crc 
kubenswrapper[4718]: I1210 16:13:15.752156 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xbhdp" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="registry-server" containerID="cri-o://a59baaff7daa9dc8e65da4f1b229f90f461aefddafe1b6e4fff12148ddba6959" gracePeriod=2 Dec 10 16:13:16 crc kubenswrapper[4718]: I1210 16:13:16.764100 4718 generic.go:334] "Generic (PLEG): container finished" podID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerID="a59baaff7daa9dc8e65da4f1b229f90f461aefddafe1b6e4fff12148ddba6959" exitCode=0 Dec 10 16:13:16 crc kubenswrapper[4718]: I1210 16:13:16.764125 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerDied","Data":"a59baaff7daa9dc8e65da4f1b229f90f461aefddafe1b6e4fff12148ddba6959"} Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.309003 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.437764 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-catalog-content\") pod \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.437945 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkmc9\" (UniqueName: \"kubernetes.io/projected/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-kube-api-access-nkmc9\") pod \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.437994 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-utilities\") pod \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\" (UID: \"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d\") " Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.439138 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-utilities" (OuterVolumeSpecName: "utilities") pod "dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" (UID: "dbfda8ef-d2e3-4d54-8551-5f6cee9d221d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.450642 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-kube-api-access-nkmc9" (OuterVolumeSpecName: "kube-api-access-nkmc9") pod "dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" (UID: "dbfda8ef-d2e3-4d54-8551-5f6cee9d221d"). InnerVolumeSpecName "kube-api-access-nkmc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.541580 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkmc9\" (UniqueName: \"kubernetes.io/projected/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-kube-api-access-nkmc9\") on node \"crc\" DevicePath \"\"" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.541647 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.651288 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" (UID: "dbfda8ef-d2e3-4d54-8551-5f6cee9d221d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.746032 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.776951 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbhdp" event={"ID":"dbfda8ef-d2e3-4d54-8551-5f6cee9d221d","Type":"ContainerDied","Data":"d1f1ab111e3cfcc9edb0e87ec1c42eec647b821a68b79447830c3b640a7e173a"} Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.777046 4718 scope.go:117] "RemoveContainer" containerID="a59baaff7daa9dc8e65da4f1b229f90f461aefddafe1b6e4fff12148ddba6959" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.777217 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xbhdp" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.802562 4718 scope.go:117] "RemoveContainer" containerID="bfcd64a338a9e635ae31433893ae9a6e5649ab41645b36eab9be8ff1fca16b32" Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.836462 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xbhdp"] Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.846319 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xbhdp"] Dec 10 16:13:17 crc kubenswrapper[4718]: I1210 16:13:17.872578 4718 scope.go:117] "RemoveContainer" containerID="6f5e1c30e1e7191dbd00699b199d8f0a70ab2540fcc40705047d6f36bcbc4cc1" Dec 10 16:13:18 crc kubenswrapper[4718]: I1210 16:13:18.034704 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" path="/var/lib/kubelet/pods/dbfda8ef-d2e3-4d54-8551-5f6cee9d221d/volumes" Dec 10 16:14:06 crc kubenswrapper[4718]: I1210 16:14:06.302340 4718 generic.go:334] "Generic (PLEG): container finished" podID="278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" containerID="1004157a1b96c2ac088177793e4a256b15af59cbcd43c6182b738d3452c00f25" exitCode=0 Dec 10 16:14:06 crc kubenswrapper[4718]: I1210 16:14:06.302832 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" event={"ID":"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e","Type":"ContainerDied","Data":"1004157a1b96c2ac088177793e4a256b15af59cbcd43c6182b738d3452c00f25"} Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.434038 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.471689 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-qs2bg"] Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.480985 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-qs2bg"] Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.535790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f8g7\" (UniqueName: \"kubernetes.io/projected/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-kube-api-access-6f8g7\") pod \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.536080 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-host\") pod \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\" (UID: \"278cfb92-d16e-4bb2-b4b4-d158f13e9d8e\") " Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.536217 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-host" (OuterVolumeSpecName: "host") pod "278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" (UID: "278cfb92-d16e-4bb2-b4b4-d158f13e9d8e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.536546 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.541822 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-kube-api-access-6f8g7" (OuterVolumeSpecName: "kube-api-access-6f8g7") pod "278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" (UID: "278cfb92-d16e-4bb2-b4b4-d158f13e9d8e"). InnerVolumeSpecName "kube-api-access-6f8g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:14:07 crc kubenswrapper[4718]: I1210 16:14:07.638072 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f8g7\" (UniqueName: \"kubernetes.io/projected/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e-kube-api-access-6f8g7\") on node \"crc\" DevicePath \"\"" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.039024 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" path="/var/lib/kubelet/pods/278cfb92-d16e-4bb2-b4b4-d158f13e9d8e/volumes" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.326423 4718 scope.go:117] "RemoveContainer" containerID="1004157a1b96c2ac088177793e4a256b15af59cbcd43c6182b738d3452c00f25" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.326473 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-qs2bg" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683043 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-95kfx"] Dec 10 16:14:08 crc kubenswrapper[4718]: E1210 16:14:08.683600 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="extract-content" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683624 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="extract-content" Dec 10 16:14:08 crc kubenswrapper[4718]: E1210 16:14:08.683647 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="registry-server" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683655 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="registry-server" Dec 10 16:14:08 crc kubenswrapper[4718]: E1210 16:14:08.683667 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="extract-utilities" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683673 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" containerName="extract-utilities" Dec 10 16:14:08 crc kubenswrapper[4718]: E1210 16:14:08.683684 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" containerName="container-00" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683690 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" containerName="container-00" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683898 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbfda8ef-d2e3-4d54-8551-5f6cee9d221d" 
containerName="registry-server" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.683931 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="278cfb92-d16e-4bb2-b4b4-d158f13e9d8e" containerName="container-00" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.685679 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.688283 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xvrn2"/"default-dockercfg-qqrtw" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.865849 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkb24\" (UniqueName: \"kubernetes.io/projected/5673eddf-6818-4f47-909e-38c966c7b6cb-kube-api-access-gkb24\") pod \"crc-debug-95kfx\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.866007 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5673eddf-6818-4f47-909e-38c966c7b6cb-host\") pod \"crc-debug-95kfx\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.968243 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5673eddf-6818-4f47-909e-38c966c7b6cb-host\") pod \"crc-debug-95kfx\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.968422 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5673eddf-6818-4f47-909e-38c966c7b6cb-host\") pod 
\"crc-debug-95kfx\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.968443 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkb24\" (UniqueName: \"kubernetes.io/projected/5673eddf-6818-4f47-909e-38c966c7b6cb-kube-api-access-gkb24\") pod \"crc-debug-95kfx\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:08 crc kubenswrapper[4718]: I1210 16:14:08.986619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkb24\" (UniqueName: \"kubernetes.io/projected/5673eddf-6818-4f47-909e-38c966c7b6cb-kube-api-access-gkb24\") pod \"crc-debug-95kfx\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:09 crc kubenswrapper[4718]: I1210 16:14:09.004821 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:09 crc kubenswrapper[4718]: I1210 16:14:09.344162 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" event={"ID":"5673eddf-6818-4f47-909e-38c966c7b6cb","Type":"ContainerStarted","Data":"fcea4d5b070fdbae0c506d61a7338592ca19e1cc8583ba0117c1336240a5726b"} Dec 10 16:14:09 crc kubenswrapper[4718]: I1210 16:14:09.344234 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" event={"ID":"5673eddf-6818-4f47-909e-38c966c7b6cb","Type":"ContainerStarted","Data":"d7de42bdd76b49e355584b118c66d1e143be496b11f4968fd56f21005e031724"} Dec 10 16:14:09 crc kubenswrapper[4718]: I1210 16:14:09.368035 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" podStartSLOduration=1.368012141 podStartE2EDuration="1.368012141s" podCreationTimestamp="2025-12-10 16:14:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:14:09.361567954 +0000 UTC m=+6154.310791371" watchObservedRunningTime="2025-12-10 16:14:09.368012141 +0000 UTC m=+6154.317235558" Dec 10 16:14:10 crc kubenswrapper[4718]: I1210 16:14:10.360308 4718 generic.go:334] "Generic (PLEG): container finished" podID="5673eddf-6818-4f47-909e-38c966c7b6cb" containerID="fcea4d5b070fdbae0c506d61a7338592ca19e1cc8583ba0117c1336240a5726b" exitCode=0 Dec 10 16:14:10 crc kubenswrapper[4718]: I1210 16:14:10.360625 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" event={"ID":"5673eddf-6818-4f47-909e-38c966c7b6cb","Type":"ContainerDied","Data":"fcea4d5b070fdbae0c506d61a7338592ca19e1cc8583ba0117c1336240a5726b"} Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.485965 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.644631 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5673eddf-6818-4f47-909e-38c966c7b6cb-host\") pod \"5673eddf-6818-4f47-909e-38c966c7b6cb\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.644774 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5673eddf-6818-4f47-909e-38c966c7b6cb-host" (OuterVolumeSpecName: "host") pod "5673eddf-6818-4f47-909e-38c966c7b6cb" (UID: "5673eddf-6818-4f47-909e-38c966c7b6cb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.645363 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkb24\" (UniqueName: \"kubernetes.io/projected/5673eddf-6818-4f47-909e-38c966c7b6cb-kube-api-access-gkb24\") pod \"5673eddf-6818-4f47-909e-38c966c7b6cb\" (UID: \"5673eddf-6818-4f47-909e-38c966c7b6cb\") " Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.649111 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5673eddf-6818-4f47-909e-38c966c7b6cb-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.652720 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5673eddf-6818-4f47-909e-38c966c7b6cb-kube-api-access-gkb24" (OuterVolumeSpecName: "kube-api-access-gkb24") pod "5673eddf-6818-4f47-909e-38c966c7b6cb" (UID: "5673eddf-6818-4f47-909e-38c966c7b6cb"). InnerVolumeSpecName "kube-api-access-gkb24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:14:11 crc kubenswrapper[4718]: I1210 16:14:11.750758 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkb24\" (UniqueName: \"kubernetes.io/projected/5673eddf-6818-4f47-909e-38c966c7b6cb-kube-api-access-gkb24\") on node \"crc\" DevicePath \"\"" Dec 10 16:14:12 crc kubenswrapper[4718]: I1210 16:14:12.319650 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-95kfx"] Dec 10 16:14:12 crc kubenswrapper[4718]: I1210 16:14:12.334196 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-95kfx"] Dec 10 16:14:12 crc kubenswrapper[4718]: I1210 16:14:12.383011 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7de42bdd76b49e355584b118c66d1e143be496b11f4968fd56f21005e031724" Dec 10 16:14:12 crc kubenswrapper[4718]: I1210 16:14:12.383137 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-95kfx" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.501505 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-mwtln"] Dec 10 16:14:13 crc kubenswrapper[4718]: E1210 16:14:13.503108 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5673eddf-6818-4f47-909e-38c966c7b6cb" containerName="container-00" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.503128 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="5673eddf-6818-4f47-909e-38c966c7b6cb" containerName="container-00" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.503418 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="5673eddf-6818-4f47-909e-38c966c7b6cb" containerName="container-00" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.504270 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.507157 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xvrn2"/"default-dockercfg-qqrtw" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.592273 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767hn\" (UniqueName: \"kubernetes.io/projected/08a852bb-c636-4bf3-9f06-129bd1278dcb-kube-api-access-767hn\") pod \"crc-debug-mwtln\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.592519 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08a852bb-c636-4bf3-9f06-129bd1278dcb-host\") pod \"crc-debug-mwtln\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.694333 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767hn\" (UniqueName: \"kubernetes.io/projected/08a852bb-c636-4bf3-9f06-129bd1278dcb-kube-api-access-767hn\") pod \"crc-debug-mwtln\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.694538 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08a852bb-c636-4bf3-9f06-129bd1278dcb-host\") pod \"crc-debug-mwtln\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.694696 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/08a852bb-c636-4bf3-9f06-129bd1278dcb-host\") pod \"crc-debug-mwtln\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.730637 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767hn\" (UniqueName: \"kubernetes.io/projected/08a852bb-c636-4bf3-9f06-129bd1278dcb-kube-api-access-767hn\") pod \"crc-debug-mwtln\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: I1210 16:14:13.825494 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:13 crc kubenswrapper[4718]: W1210 16:14:13.857781 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a852bb_c636_4bf3_9f06_129bd1278dcb.slice/crio-9580a1b712a4ef635f445332cc5741372b0f79aa834473c6898fe9e97504c35e WatchSource:0}: Error finding container 9580a1b712a4ef635f445332cc5741372b0f79aa834473c6898fe9e97504c35e: Status 404 returned error can't find the container with id 9580a1b712a4ef635f445332cc5741372b0f79aa834473c6898fe9e97504c35e Dec 10 16:14:14 crc kubenswrapper[4718]: I1210 16:14:14.043970 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5673eddf-6818-4f47-909e-38c966c7b6cb" path="/var/lib/kubelet/pods/5673eddf-6818-4f47-909e-38c966c7b6cb/volumes" Dec 10 16:14:14 crc kubenswrapper[4718]: I1210 16:14:14.409427 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-mwtln" event={"ID":"08a852bb-c636-4bf3-9f06-129bd1278dcb","Type":"ContainerStarted","Data":"9580a1b712a4ef635f445332cc5741372b0f79aa834473c6898fe9e97504c35e"} Dec 10 16:14:15 crc kubenswrapper[4718]: I1210 16:14:15.420354 4718 generic.go:334] "Generic (PLEG): container 
finished" podID="08a852bb-c636-4bf3-9f06-129bd1278dcb" containerID="29e66ac3187b42d7b0a5add69462d8a5c47a99ab21a7d9c4a9d6713fb8ccbc33" exitCode=0 Dec 10 16:14:15 crc kubenswrapper[4718]: I1210 16:14:15.420479 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/crc-debug-mwtln" event={"ID":"08a852bb-c636-4bf3-9f06-129bd1278dcb","Type":"ContainerDied","Data":"29e66ac3187b42d7b0a5add69462d8a5c47a99ab21a7d9c4a9d6713fb8ccbc33"} Dec 10 16:14:15 crc kubenswrapper[4718]: I1210 16:14:15.471842 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-mwtln"] Dec 10 16:14:15 crc kubenswrapper[4718]: I1210 16:14:15.483434 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xvrn2/crc-debug-mwtln"] Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.541624 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.668771 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08a852bb-c636-4bf3-9f06-129bd1278dcb-host\") pod \"08a852bb-c636-4bf3-9f06-129bd1278dcb\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.668840 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08a852bb-c636-4bf3-9f06-129bd1278dcb-host" (OuterVolumeSpecName: "host") pod "08a852bb-c636-4bf3-9f06-129bd1278dcb" (UID: "08a852bb-c636-4bf3-9f06-129bd1278dcb"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.669019 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-767hn\" (UniqueName: \"kubernetes.io/projected/08a852bb-c636-4bf3-9f06-129bd1278dcb-kube-api-access-767hn\") pod \"08a852bb-c636-4bf3-9f06-129bd1278dcb\" (UID: \"08a852bb-c636-4bf3-9f06-129bd1278dcb\") " Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.669702 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08a852bb-c636-4bf3-9f06-129bd1278dcb-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.693361 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a852bb-c636-4bf3-9f06-129bd1278dcb-kube-api-access-767hn" (OuterVolumeSpecName: "kube-api-access-767hn") pod "08a852bb-c636-4bf3-9f06-129bd1278dcb" (UID: "08a852bb-c636-4bf3-9f06-129bd1278dcb"). InnerVolumeSpecName "kube-api-access-767hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:14:16 crc kubenswrapper[4718]: I1210 16:14:16.772079 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-767hn\" (UniqueName: \"kubernetes.io/projected/08a852bb-c636-4bf3-9f06-129bd1278dcb-kube-api-access-767hn\") on node \"crc\" DevicePath \"\"" Dec 10 16:14:17 crc kubenswrapper[4718]: I1210 16:14:17.443166 4718 scope.go:117] "RemoveContainer" containerID="29e66ac3187b42d7b0a5add69462d8a5c47a99ab21a7d9c4a9d6713fb8ccbc33" Dec 10 16:14:17 crc kubenswrapper[4718]: I1210 16:14:17.443199 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xvrn2/crc-debug-mwtln" Dec 10 16:14:18 crc kubenswrapper[4718]: I1210 16:14:18.033920 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a852bb-c636-4bf3-9f06-129bd1278dcb" path="/var/lib/kubelet/pods/08a852bb-c636-4bf3-9f06-129bd1278dcb/volumes" Dec 10 16:14:18 crc kubenswrapper[4718]: I1210 16:14:18.084437 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:14:18 crc kubenswrapper[4718]: I1210 16:14:18.084523 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:14:43 crc kubenswrapper[4718]: I1210 16:14:43.161896 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58896fd778-pk5pp_c53fcbf5-3330-4aff-a699-ff475344e705/barbican-api/0.log" Dec 10 16:14:43 crc kubenswrapper[4718]: I1210 16:14:43.503211 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58896fd778-pk5pp_c53fcbf5-3330-4aff-a699-ff475344e705/barbican-api-log/0.log" Dec 10 16:14:43 crc kubenswrapper[4718]: I1210 16:14:43.649585 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-595577df7d-rjzmf_2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc/barbican-keystone-listener/0.log" Dec 10 16:14:43 crc kubenswrapper[4718]: I1210 16:14:43.777744 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-595577df7d-rjzmf_2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc/barbican-keystone-listener-log/0.log" Dec 10 16:14:43 crc kubenswrapper[4718]: I1210 16:14:43.866293 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6894dc7-l9prg_f8b33603-9f2b-410e-a4ae-52b20ea62bd9/barbican-worker/0.log" Dec 10 16:14:43 crc kubenswrapper[4718]: I1210 16:14:43.882242 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6894dc7-l9prg_f8b33603-9f2b-410e-a4ae-52b20ea62bd9/barbican-worker-log/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.016317 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7_a7b1a942-17e0-4573-abd2-4bf182a8eef0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.181681 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/ceilometer-central-agent/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.291122 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/proxy-httpd/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.315927 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/ceilometer-notification-agent/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.376014 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/sg-core/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.583628 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fbb11f94-73a2-4870-94d1-f7c6a699bc57/cinder-api-log/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 
16:14:44.819356 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7/cinder-scheduler/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.895108 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fbb11f94-73a2-4870-94d1-f7c6a699bc57/cinder-api/0.log" Dec 10 16:14:44 crc kubenswrapper[4718]: I1210 16:14:44.903270 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7/probe/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.174567 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-2plpc_7cea2862-2631-4d3a-98f8-29afc2428d28/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.211675 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr_2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.397647 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c6b84c7df-hwqf9_177ad74b-362a-478f-a755-7c2862fa179d/init/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.561462 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c6b84c7df-hwqf9_177ad74b-362a-478f-a755-7c2862fa179d/init/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.663698 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb_93b2ee40-9140-4ccf-8af3-d9bfc04ca78c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.766087 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5c6b84c7df-hwqf9_177ad74b-362a-478f-a755-7c2862fa179d/dnsmasq-dns/0.log" Dec 10 16:14:45 crc kubenswrapper[4718]: I1210 16:14:45.959656 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b/glance-httpd/0.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.002873 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b/glance-log/0.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.223233 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2504be79-1852-48ec-b2d2-d687ae68bd09/glance-log/0.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.248276 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2504be79-1852-48ec-b2d2-d687ae68bd09/glance-httpd/0.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.445174 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c9ddb894-brvxz_e1a09589-44b9-49f4-8970-d3381c3d4b99/horizon/1.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.630322 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c9ddb894-brvxz_e1a09589-44b9-49f4-8970-d3381c3d4b99/horizon/0.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.755325 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl_3de81842-2365-419e-88fd-b0b4611f3e8e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:46 crc kubenswrapper[4718]: I1210 16:14:46.992746 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8vcjs_df7b87da-2bb2-494d-b840-478a58f1950c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:47 crc kubenswrapper[4718]: I1210 16:14:47.237838 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c9ddb894-brvxz_e1a09589-44b9-49f4-8970-d3381c3d4b99/horizon-log/0.log" Dec 10 16:14:47 crc kubenswrapper[4718]: I1210 16:14:47.355008 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29422981-fh4sm_dc3a87e8-6bf0-43e8-a75d-d743c4182d36/keystone-cron/0.log" Dec 10 16:14:47 crc kubenswrapper[4718]: I1210 16:14:47.597335 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29423041-7hjkl_c6059577-40c7-4880-8a8f-f0b5736dbac2/keystone-cron/0.log" Dec 10 16:14:47 crc kubenswrapper[4718]: I1210 16:14:47.665672 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c/kube-state-metrics/0.log" Dec 10 16:14:47 crc kubenswrapper[4718]: I1210 16:14:47.684209 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6bcdc7c9dc-hxhdn_6e6f831b-5d26-4e7c-9b6b-ebddeb01327c/keystone-api/0.log" Dec 10 16:14:47 crc kubenswrapper[4718]: I1210 16:14:47.886077 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj_080c7769-f2d8-47fa-aa3d-a1b63190a679/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:48 crc kubenswrapper[4718]: I1210 16:14:48.083851 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:14:48 crc kubenswrapper[4718]: I1210 16:14:48.083913 4718 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:14:48 crc kubenswrapper[4718]: I1210 16:14:48.384265 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb_2ce587d1-61d0-4844-bb2b-54894131a5bb/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:48 crc kubenswrapper[4718]: I1210 16:14:48.404645 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fd4cf9989-fxv7b_ddd7c56e-7efb-44f9-8da2-45d0d54a9756/neutron-httpd/0.log" Dec 10 16:14:48 crc kubenswrapper[4718]: I1210 16:14:48.471357 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fd4cf9989-fxv7b_ddd7c56e-7efb-44f9-8da2-45d0d54a9756/neutron-api/0.log" Dec 10 16:14:49 crc kubenswrapper[4718]: I1210 16:14:49.190649 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_61020ad7-3d1d-4dab-9d46-2bb54b2e92d0/nova-cell0-conductor-conductor/0.log" Dec 10 16:14:49 crc kubenswrapper[4718]: I1210 16:14:49.427030 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_612c753e-cb9b-4995-b219-6e3b0d60cc22/nova-cell1-conductor-conductor/0.log" Dec 10 16:14:49 crc kubenswrapper[4718]: I1210 16:14:49.780970 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_da134815-ca06-4544-86a3-ebbc3d219c56/nova-cell1-novncproxy-novncproxy/0.log" Dec 10 16:14:50 crc kubenswrapper[4718]: I1210 16:14:50.068515 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-td49f_2dd95b44-946b-43ef-91a6-3eeab6ded836/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:50 crc kubenswrapper[4718]: I1210 16:14:50.224313 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_12aef49c-9e40-4cc4-a280-103e9c6180de/nova-api-log/0.log" Dec 10 16:14:50 crc kubenswrapper[4718]: I1210 16:14:50.431866 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_badca5dd-ef88-4de8-a596-9cb2adc01193/nova-metadata-log/0.log" Dec 10 16:14:50 crc kubenswrapper[4718]: I1210 16:14:50.640950 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_12aef49c-9e40-4cc4-a280-103e9c6180de/nova-api-api/0.log" Dec 10 16:14:50 crc kubenswrapper[4718]: I1210 16:14:50.904534 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f81943cf-47c7-424d-9473-2df3195bc9a6/mysql-bootstrap/0.log" Dec 10 16:14:50 crc kubenswrapper[4718]: I1210 16:14:50.950470 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_cc4b750b-8599-4d08-9b09-d2d75f035dc4/nova-scheduler-scheduler/0.log" Dec 10 16:14:51 crc kubenswrapper[4718]: I1210 16:14:51.118708 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f81943cf-47c7-424d-9473-2df3195bc9a6/mysql-bootstrap/0.log" Dec 10 16:14:51 crc kubenswrapper[4718]: I1210 16:14:51.172067 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f81943cf-47c7-424d-9473-2df3195bc9a6/galera/0.log" Dec 10 16:14:51 crc kubenswrapper[4718]: I1210 16:14:51.404259 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0708d5de-311d-46e3-981e-7bd7a2fc495c/mysql-bootstrap/0.log" Dec 10 16:14:51 crc kubenswrapper[4718]: I1210 16:14:51.595681 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_0708d5de-311d-46e3-981e-7bd7a2fc495c/mysql-bootstrap/0.log" Dec 10 16:14:51 crc kubenswrapper[4718]: I1210 16:14:51.693123 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0708d5de-311d-46e3-981e-7bd7a2fc495c/galera/0.log" Dec 10 16:14:51 crc kubenswrapper[4718]: I1210 16:14:51.833521 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c0b43254-f8fe-4187-a8ce-aa65f7ac327e/openstackclient/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.078576 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ftrjh_3e4f376b-2175-46d8-8b88-0560a3fcf231/ovn-controller/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.253090 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nf6b6_7326e5bc-27b1-4b9a-b0ea-979589622ea3/openstack-network-exporter/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.421101 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovsdb-server-init/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.665115 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_badca5dd-ef88-4de8-a596-9cb2adc01193/nova-metadata-metadata/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.676552 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovsdb-server/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.697198 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovsdb-server-init/0.log" Dec 10 16:14:52 crc kubenswrapper[4718]: I1210 16:14:52.948112 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sfg42_8ace4c93-2b2a-4185-b16a-d782334fa608/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.009111 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovs-vswitchd/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.088086 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_870a0a88-dfaa-49b8-96ae-96f5991f2e75/openstack-network-exporter/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.198977 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_870a0a88-dfaa-49b8-96ae-96f5991f2e75/ovn-northd/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.289332 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_50fbf3ca-d871-4ccd-a412-636fa783e3d4/openstack-network-exporter/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.367715 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_50fbf3ca-d871-4ccd-a412-636fa783e3d4/ovsdbserver-nb/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.484049 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ccf190d-cc0e-471c-b506-9784b1e8b038/openstack-network-exporter/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.580093 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ccf190d-cc0e-471c-b506-9784b1e8b038/ovsdbserver-sb/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.901576 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb9f4c9bb-nx4ml_2ea453fb-60ad-4093-b15f-5cb288f92511/placement-api/0.log" Dec 10 16:14:53 crc kubenswrapper[4718]: I1210 16:14:53.967021 4718 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/init-config-reloader/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.015980 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb9f4c9bb-nx4ml_2ea453fb-60ad-4093-b15f-5cb288f92511/placement-log/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.244706 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/config-reloader/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.281624 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/prometheus/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.283544 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/init-config-reloader/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.317739 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/thanos-sidecar/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.541041 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_55b4c58e-c07e-4cd2-8592-f57b1d9f9233/setup-container/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.810796 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_55b4c58e-c07e-4cd2-8592-f57b1d9f9233/setup-container/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.861679 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_55b4c58e-c07e-4cd2-8592-f57b1d9f9233/rabbitmq/0.log" Dec 10 16:14:54 crc kubenswrapper[4718]: I1210 16:14:54.865992 4718 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5611ee41-14a4-45d3-88b1-e6e6c9bc4d13/setup-container/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.226286 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5611ee41-14a4-45d3-88b1-e6e6c9bc4d13/rabbitmq/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.269964 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5611ee41-14a4-45d3-88b1-e6e6c9bc4d13/setup-container/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.319717 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e530819-d029-4526-aed9-2cd33568dbcb/setup-container/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.471178 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e530819-d029-4526-aed9-2cd33568dbcb/setup-container/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.478258 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e530819-d029-4526-aed9-2cd33568dbcb/rabbitmq/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.703896 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6_71d72952-f3a0-4c3c-97f8-26c143f154cc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:55 crc kubenswrapper[4718]: I1210 16:14:55.801550 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-b6wmm_99702136-5bd9-4803-8ce9-8a89bd572648/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:56 crc kubenswrapper[4718]: I1210 16:14:56.334303 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq_10400b99-0213-470b-b37a-f0b9cd98ab2b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:56 crc kubenswrapper[4718]: I1210 16:14:56.422488 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dt6pb_311630cc-3a9b-48d5-9407-879b0f508508/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:56 crc kubenswrapper[4718]: I1210 16:14:56.650097 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bsdk2_4d6f6be0-1d66-4c7e-a8e9-09826a416501/ssh-known-hosts-edpm-deployment/0.log" Dec 10 16:14:56 crc kubenswrapper[4718]: I1210 16:14:56.952055 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c7874df4c-ns7dm_52552bcb-7acc-4882-86ef-0353a39e7262/proxy-server/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.039690 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nhp59_ea7defa5-2130-4d6d-8bba-9416bec21dfa/swift-ring-rebalance/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.130848 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c7874df4c-ns7dm_52552bcb-7acc-4882-86ef-0353a39e7262/proxy-httpd/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.291296 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-auditor/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.329622 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-reaper/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.514464 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-replicator/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.628749 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-server/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.656525 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-auditor/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.735301 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-replicator/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.870310 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-server/0.log" Dec 10 16:14:57 crc kubenswrapper[4718]: I1210 16:14:57.920333 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-updater/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.035007 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-expirer/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.056255 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-auditor/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.157445 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-replicator/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.219278 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-server/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.335302 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/rsync/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.381382 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-updater/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.508276 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/swift-recon-cron/0.log" Dec 10 16:14:58 crc kubenswrapper[4718]: I1210 16:14:58.669737 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl_7aeaa205-67e7-4b41-a2d6-fff74ab0d61b/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:14:59 crc kubenswrapper[4718]: I1210 16:14:59.120031 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_df134785-8fb2-418f-89ba-55f6d822f50a/tempest-tests-tempest-tests-runner/0.log" Dec 10 16:14:59 crc kubenswrapper[4718]: I1210 16:14:59.155549 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e/test-operator-logs-container/0.log" Dec 10 16:14:59 crc kubenswrapper[4718]: I1210 16:14:59.404721 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9_e98b8eb6-cfd6-4125-973e-7cda6cdeceeb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.167110 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk"] 
Dec 10 16:15:00 crc kubenswrapper[4718]: E1210 16:15:00.169155 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a852bb-c636-4bf3-9f06-129bd1278dcb" containerName="container-00" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.169309 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a852bb-c636-4bf3-9f06-129bd1278dcb" containerName="container-00" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.169784 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a852bb-c636-4bf3-9f06-129bd1278dcb" containerName="container-00" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.171147 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.174137 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.175777 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.182008 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk"] Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.256296 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63275924-0729-4a9e-be2a-12316f70282f-secret-volume\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.256407 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7f8jp\" (UniqueName: \"kubernetes.io/projected/63275924-0729-4a9e-be2a-12316f70282f-kube-api-access-7f8jp\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.256443 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63275924-0729-4a9e-be2a-12316f70282f-config-volume\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.359921 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8jp\" (UniqueName: \"kubernetes.io/projected/63275924-0729-4a9e-be2a-12316f70282f-kube-api-access-7f8jp\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.360357 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63275924-0729-4a9e-be2a-12316f70282f-config-volume\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.361453 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63275924-0729-4a9e-be2a-12316f70282f-secret-volume\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.362076 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63275924-0729-4a9e-be2a-12316f70282f-config-volume\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.369983 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63275924-0729-4a9e-be2a-12316f70282f-secret-volume\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.401890 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8jp\" (UniqueName: \"kubernetes.io/projected/63275924-0729-4a9e-be2a-12316f70282f-kube-api-access-7f8jp\") pod \"collect-profiles-29423055-g2mpk\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.507991 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:00 crc kubenswrapper[4718]: I1210 16:15:00.523992 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_cda091c0-e668-4132-95bf-e956b4ee9b39/watcher-applier/0.log" Dec 10 16:15:01 crc kubenswrapper[4718]: I1210 16:15:01.026004 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk"] Dec 10 16:15:01 crc kubenswrapper[4718]: I1210 16:15:01.459926 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_b0c0a449-91c3-43fe-ba1b-02a146745b82/watcher-api-log/0.log" Dec 10 16:15:01 crc kubenswrapper[4718]: I1210 16:15:01.945732 4718 generic.go:334] "Generic (PLEG): container finished" podID="63275924-0729-4a9e-be2a-12316f70282f" containerID="12552f49b1a9886341a225f3adb81742a458ef8d8fa30a042ca714eb62580a6d" exitCode=0 Dec 10 16:15:01 crc kubenswrapper[4718]: I1210 16:15:01.946091 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" event={"ID":"63275924-0729-4a9e-be2a-12316f70282f","Type":"ContainerDied","Data":"12552f49b1a9886341a225f3adb81742a458ef8d8fa30a042ca714eb62580a6d"} Dec 10 16:15:01 crc kubenswrapper[4718]: I1210 16:15:01.946129 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" event={"ID":"63275924-0729-4a9e-be2a-12316f70282f","Type":"ContainerStarted","Data":"477a309d8eb5143c927732d5f15418efe1c23210b3cc54e5d37a1bc65a2bc53f"} Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.385248 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.451053 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63275924-0729-4a9e-be2a-12316f70282f-config-volume\") pod \"63275924-0729-4a9e-be2a-12316f70282f\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.451250 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63275924-0729-4a9e-be2a-12316f70282f-secret-volume\") pod \"63275924-0729-4a9e-be2a-12316f70282f\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.451300 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f8jp\" (UniqueName: \"kubernetes.io/projected/63275924-0729-4a9e-be2a-12316f70282f-kube-api-access-7f8jp\") pod \"63275924-0729-4a9e-be2a-12316f70282f\" (UID: \"63275924-0729-4a9e-be2a-12316f70282f\") " Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.455309 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63275924-0729-4a9e-be2a-12316f70282f-config-volume" (OuterVolumeSpecName: "config-volume") pod "63275924-0729-4a9e-be2a-12316f70282f" (UID: "63275924-0729-4a9e-be2a-12316f70282f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.462797 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63275924-0729-4a9e-be2a-12316f70282f-kube-api-access-7f8jp" (OuterVolumeSpecName: "kube-api-access-7f8jp") pod "63275924-0729-4a9e-be2a-12316f70282f" (UID: "63275924-0729-4a9e-be2a-12316f70282f"). 
InnerVolumeSpecName "kube-api-access-7f8jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.484637 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63275924-0729-4a9e-be2a-12316f70282f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "63275924-0729-4a9e-be2a-12316f70282f" (UID: "63275924-0729-4a9e-be2a-12316f70282f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.554978 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63275924-0729-4a9e-be2a-12316f70282f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.555022 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/63275924-0729-4a9e-be2a-12316f70282f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.555036 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f8jp\" (UniqueName: \"kubernetes.io/projected/63275924-0729-4a9e-be2a-12316f70282f-kube-api-access-7f8jp\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.977322 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" event={"ID":"63275924-0729-4a9e-be2a-12316f70282f","Type":"ContainerDied","Data":"477a309d8eb5143c927732d5f15418efe1c23210b3cc54e5d37a1bc65a2bc53f"} Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.977366 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="477a309d8eb5143c927732d5f15418efe1c23210b3cc54e5d37a1bc65a2bc53f" Dec 10 16:15:03 crc kubenswrapper[4718]: I1210 16:15:03.977694 4718 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423055-g2mpk" Dec 10 16:15:04 crc kubenswrapper[4718]: I1210 16:15:04.496480 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq"] Dec 10 16:15:04 crc kubenswrapper[4718]: I1210 16:15:04.504902 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423010-rwmzq"] Dec 10 16:15:04 crc kubenswrapper[4718]: I1210 16:15:04.697902 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_0105b21d-8a6a-4368-aec4-80c009daecd1/watcher-decision-engine/0.log" Dec 10 16:15:06 crc kubenswrapper[4718]: I1210 16:15:06.040095 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a02cd7d-3718-4e26-b054-9a47df1e475b" path="/var/lib/kubelet/pods/4a02cd7d-3718-4e26-b054-9a47df1e475b/volumes" Dec 10 16:15:06 crc kubenswrapper[4718]: I1210 16:15:06.739607 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_b0c0a449-91c3-43fe-ba1b-02a146745b82/watcher-api/0.log" Dec 10 16:15:18 crc kubenswrapper[4718]: I1210 16:15:18.084130 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:15:18 crc kubenswrapper[4718]: I1210 16:15:18.084793 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:15:18 crc kubenswrapper[4718]: I1210 
16:15:18.084868 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 16:15:18 crc kubenswrapper[4718]: I1210 16:15:18.086224 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c11c531f13dd9c964e72f22f6c356f673d535df358bec5460370fca430952f1"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:15:18 crc kubenswrapper[4718]: I1210 16:15:18.086411 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://0c11c531f13dd9c964e72f22f6c356f673d535df358bec5460370fca430952f1" gracePeriod=600 Dec 10 16:15:19 crc kubenswrapper[4718]: I1210 16:15:19.205808 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="0c11c531f13dd9c964e72f22f6c356f673d535df358bec5460370fca430952f1" exitCode=0 Dec 10 16:15:19 crc kubenswrapper[4718]: I1210 16:15:19.206234 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"0c11c531f13dd9c964e72f22f6c356f673d535df358bec5460370fca430952f1"} Dec 10 16:15:19 crc kubenswrapper[4718]: I1210 16:15:19.206762 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"} Dec 10 16:15:19 crc kubenswrapper[4718]: I1210 16:15:19.206800 4718 scope.go:117] 
"RemoveContainer" containerID="e617b8520f6e76222e4940a8409c36020b31e0b95dc5aafbdaf0e3c690b84a37" Dec 10 16:15:19 crc kubenswrapper[4718]: I1210 16:15:19.531556 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_571880ea-f2a9-4e9e-99a5-c8bcaffb8675/memcached/0.log" Dec 10 16:15:37 crc kubenswrapper[4718]: I1210 16:15:37.830395 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4s6xq_61e4671d-9417-472d-9d76-64fdcc0e3297/kube-rbac-proxy/0.log" Dec 10 16:15:37 crc kubenswrapper[4718]: I1210 16:15:37.857965 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4s6xq_61e4671d-9417-472d-9d76-64fdcc0e3297/manager/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.075543 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jfdsv_82086b4c-0222-45a7-a3c3-fc2504f63a4e/kube-rbac-proxy/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.138436 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jfdsv_82086b4c-0222-45a7-a3c3-fc2504f63a4e/manager/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.197696 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/util/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.634121 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/util/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.647300 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/pull/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.648214 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/pull/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.848360 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/util/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.891004 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/pull/0.log" Dec 10 16:15:38 crc kubenswrapper[4718]: I1210 16:15:38.897765 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/extract/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.095158 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6mx57_a3023af7-f9ec-44a3-a532-0f6d51843443/kube-rbac-proxy/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.111460 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6mx57_a3023af7-f9ec-44a3-a532-0f6d51843443/manager/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.159021 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-xvwrl_513a8781-70b0-4692-9141-0c60ef254a98/kube-rbac-proxy/0.log" Dec 10 16:15:39 crc 
kubenswrapper[4718]: I1210 16:15:39.390460 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-xvwrl_513a8781-70b0-4692-9141-0c60ef254a98/manager/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.426291 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gw22w_a701287e-359e-429d-8b94-c4e06e8922a8/kube-rbac-proxy/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.431722 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gw22w_a701287e-359e-429d-8b94-c4e06e8922a8/manager/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.633583 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-mshqn_80f8ae23-3a84-4810-9868-6571b6cf56a1/kube-rbac-proxy/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.672106 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-mshqn_80f8ae23-3a84-4810-9868-6571b6cf56a1/manager/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.866160 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-cvh4g_6a2a49d9-73fc-4798-8173-ed230aa16811/kube-rbac-proxy/0.log" Dec 10 16:15:39 crc kubenswrapper[4718]: I1210 16:15:39.980680 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-2jgmr_2516d98a-9991-4d5b-9791-14642a4ec629/kube-rbac-proxy/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.134160 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-cvh4g_6a2a49d9-73fc-4798-8173-ed230aa16811/manager/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.151991 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-2jgmr_2516d98a-9991-4d5b-9791-14642a4ec629/manager/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.192964 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-9sxwk_7d8ae7e9-7545-4ab6-b87c-6c5484b47424/kube-rbac-proxy/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.411994 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-9sxwk_7d8ae7e9-7545-4ab6-b87c-6c5484b47424/manager/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.454879 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-cfcm2_e7677f94-866d-45c7-b1c9-70fd2b7c7012/kube-rbac-proxy/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.531640 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-cfcm2_e7677f94-866d-45c7-b1c9-70fd2b7c7012/manager/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.703696 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7qqzr_32690a0c-0ce7-4639-b30f-18a1a91ed86d/kube-rbac-proxy/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.757256 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7qqzr_32690a0c-0ce7-4639-b30f-18a1a91ed86d/manager/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.926305 4718 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-6smz5_191f3c0d-be7d-463e-9979-922dfb629747/kube-rbac-proxy/0.log" Dec 10 16:15:40 crc kubenswrapper[4718]: I1210 16:15:40.987735 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-6smz5_191f3c0d-be7d-463e-9979-922dfb629747/manager/0.log" Dec 10 16:15:41 crc kubenswrapper[4718]: I1210 16:15:41.041471 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-426mn_a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b/kube-rbac-proxy/0.log" Dec 10 16:15:41 crc kubenswrapper[4718]: I1210 16:15:41.238869 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-426mn_a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b/manager/0.log" Dec 10 16:15:41 crc kubenswrapper[4718]: I1210 16:15:41.283521 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-s4568_664faf77-d6a3-4b57-9dc9-ca7a4879c0ef/kube-rbac-proxy/0.log" Dec 10 16:15:41 crc kubenswrapper[4718]: I1210 16:15:41.324865 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-s4568_664faf77-d6a3-4b57-9dc9-ca7a4879c0ef/manager/0.log" Dec 10 16:15:41 crc kubenswrapper[4718]: I1210 16:15:41.498638 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl_463f6bf2-85ef-488a-8223-56898633fe8f/kube-rbac-proxy/0.log" Dec 10 16:15:41 crc kubenswrapper[4718]: I1210 16:15:41.552226 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl_463f6bf2-85ef-488a-8223-56898633fe8f/manager/0.log" Dec 
10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.054343 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-966884dd6-7tsss_08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f/operator/0.log" Dec 10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.412214 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-hq6tv_204f0155-9693-4239-8a7a-440255d5ad50/kube-rbac-proxy/0.log" Dec 10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.573358 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-hq6tv_204f0155-9693-4239-8a7a-440255d5ad50/manager/0.log" Dec 10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.630098 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggxhd_5b117425-d366-4008-9216-4696f8736b81/registry-server/0.log" Dec 10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.693782 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6s7dx_e4e01550-5ee5-4afc-a01a-b3ea52b47f23/kube-rbac-proxy/0.log" Dec 10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.875953 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6s7dx_e4e01550-5ee5-4afc-a01a-b3ea52b47f23/manager/0.log" Dec 10 16:15:42 crc kubenswrapper[4718]: I1210 16:15:42.956544 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nb754_91cdfe7c-2e49-4919-a7ff-0559e12ecf8b/operator/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.082668 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-7vvmc_daeefe3a-b055-4ee9-be2e-a93afc257365/kube-rbac-proxy/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.272114 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-7vvmc_daeefe3a-b055-4ee9-be2e-a93afc257365/manager/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.319131 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kxwk7"] Dec 10 16:15:43 crc kubenswrapper[4718]: E1210 16:15:43.319714 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63275924-0729-4a9e-be2a-12316f70282f" containerName="collect-profiles" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.319733 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="63275924-0729-4a9e-be2a-12316f70282f" containerName="collect-profiles" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.319940 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="63275924-0729-4a9e-be2a-12316f70282f" containerName="collect-profiles" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.321508 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.339610 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxwk7"] Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.369279 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85cbc5886b-z2lw4_81cbd3a0-2031-418d-95b6-fdac9d170a51/manager/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.405417 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-catalog-content\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.406062 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-utilities\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.406271 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwhcv\" (UniqueName: \"kubernetes.io/projected/31e61951-a00b-4150-be6e-2705059eeeee-kube-api-access-nwhcv\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.448498 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-jqlwv_469e8dbb-654f-464b-80f9-ac7b0d55439f/kube-rbac-proxy/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.507838 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwhcv\" (UniqueName: \"kubernetes.io/projected/31e61951-a00b-4150-be6e-2705059eeeee-kube-api-access-nwhcv\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.508189 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-catalog-content\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.508381 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-utilities\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.509022 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-utilities\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.511845 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-catalog-content\") pod 
\"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.535503 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwhcv\" (UniqueName: \"kubernetes.io/projected/31e61951-a00b-4150-be6e-2705059eeeee-kube-api-access-nwhcv\") pod \"community-operators-kxwk7\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.668621 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-r7vfj_12ba5675-3e82-41d7-be5a-ecbe1a440af5/kube-rbac-proxy/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.669198 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.692122 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-jqlwv_469e8dbb-654f-464b-80f9-ac7b0d55439f/manager/0.log" Dec 10 16:15:43 crc kubenswrapper[4718]: I1210 16:15:43.862453 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-r7vfj_12ba5675-3e82-41d7-be5a-ecbe1a440af5/manager/0.log" Dec 10 16:15:44 crc kubenswrapper[4718]: I1210 16:15:44.181094 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rn6gn_21d69144-5afe-4aa8-95f0-c6e7c8802b14/kube-rbac-proxy/0.log" Dec 10 16:15:44 crc kubenswrapper[4718]: I1210 16:15:44.284029 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxwk7"] Dec 10 16:15:44 crc kubenswrapper[4718]: I1210 16:15:44.307064 4718 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rn6gn_21d69144-5afe-4aa8-95f0-c6e7c8802b14/manager/0.log" Dec 10 16:15:44 crc kubenswrapper[4718]: I1210 16:15:44.459716 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerStarted","Data":"ff3472b66ba22b7bdbf692e449e06dce6dc6a5ef4ffabbbba3cf14c1a65784dd"} Dec 10 16:15:45 crc kubenswrapper[4718]: I1210 16:15:45.469234 4718 generic.go:334] "Generic (PLEG): container finished" podID="31e61951-a00b-4150-be6e-2705059eeeee" containerID="44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e" exitCode=0 Dec 10 16:15:45 crc kubenswrapper[4718]: I1210 16:15:45.469287 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerDied","Data":"44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e"} Dec 10 16:15:45 crc kubenswrapper[4718]: I1210 16:15:45.472590 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:15:46 crc kubenswrapper[4718]: I1210 16:15:46.482143 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerStarted","Data":"236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00"} Dec 10 16:15:47 crc kubenswrapper[4718]: E1210 16:15:47.042841 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e61951_a00b_4150_be6e_2705059eeeee.slice/crio-236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00.scope\": RecentStats: unable to find data in memory cache]" Dec 10 16:15:47 crc 
kubenswrapper[4718]: I1210 16:15:47.494861 4718 generic.go:334] "Generic (PLEG): container finished" podID="31e61951-a00b-4150-be6e-2705059eeeee" containerID="236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00" exitCode=0 Dec 10 16:15:47 crc kubenswrapper[4718]: I1210 16:15:47.494919 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerDied","Data":"236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00"} Dec 10 16:15:48 crc kubenswrapper[4718]: I1210 16:15:48.507206 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerStarted","Data":"ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8"} Dec 10 16:15:48 crc kubenswrapper[4718]: I1210 16:15:48.531086 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kxwk7" podStartSLOduration=2.9759388490000003 podStartE2EDuration="5.531058471s" podCreationTimestamp="2025-12-10 16:15:43 +0000 UTC" firstStartedPulling="2025-12-10 16:15:45.472232342 +0000 UTC m=+6250.421455759" lastFinishedPulling="2025-12-10 16:15:48.027351964 +0000 UTC m=+6252.976575381" observedRunningTime="2025-12-10 16:15:48.529158974 +0000 UTC m=+6253.478382401" watchObservedRunningTime="2025-12-10 16:15:48.531058471 +0000 UTC m=+6253.480281908" Dec 10 16:15:53 crc kubenswrapper[4718]: I1210 16:15:53.670696 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:53 crc kubenswrapper[4718]: I1210 16:15:53.671328 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:53 crc kubenswrapper[4718]: I1210 16:15:53.729449 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:54 crc kubenswrapper[4718]: I1210 16:15:54.615730 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:57 crc kubenswrapper[4718]: I1210 16:15:57.279581 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxwk7"] Dec 10 16:15:57 crc kubenswrapper[4718]: I1210 16:15:57.280462 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kxwk7" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="registry-server" containerID="cri-o://ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8" gracePeriod=2 Dec 10 16:15:57 crc kubenswrapper[4718]: I1210 16:15:57.531316 4718 scope.go:117] "RemoveContainer" containerID="3c982718c52d8aec964ad4cdf30e83736ce2bfa51c3ceb95be430e20257f322f" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.304414 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.449367 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwhcv\" (UniqueName: \"kubernetes.io/projected/31e61951-a00b-4150-be6e-2705059eeeee-kube-api-access-nwhcv\") pod \"31e61951-a00b-4150-be6e-2705059eeeee\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.449643 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-utilities\") pod \"31e61951-a00b-4150-be6e-2705059eeeee\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.449822 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-catalog-content\") pod \"31e61951-a00b-4150-be6e-2705059eeeee\" (UID: \"31e61951-a00b-4150-be6e-2705059eeeee\") " Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.455685 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e61951-a00b-4150-be6e-2705059eeeee-kube-api-access-nwhcv" (OuterVolumeSpecName: "kube-api-access-nwhcv") pod "31e61951-a00b-4150-be6e-2705059eeeee" (UID: "31e61951-a00b-4150-be6e-2705059eeeee"). InnerVolumeSpecName "kube-api-access-nwhcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.463065 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-utilities" (OuterVolumeSpecName: "utilities") pod "31e61951-a00b-4150-be6e-2705059eeeee" (UID: "31e61951-a00b-4150-be6e-2705059eeeee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.503307 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31e61951-a00b-4150-be6e-2705059eeeee" (UID: "31e61951-a00b-4150-be6e-2705059eeeee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.552094 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.552140 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwhcv\" (UniqueName: \"kubernetes.io/projected/31e61951-a00b-4150-be6e-2705059eeeee-kube-api-access-nwhcv\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.552150 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e61951-a00b-4150-be6e-2705059eeeee-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.606812 4718 generic.go:334] "Generic (PLEG): container finished" podID="31e61951-a00b-4150-be6e-2705059eeeee" containerID="ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8" exitCode=0 Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.606871 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerDied","Data":"ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8"} Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.606910 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-kxwk7" event={"ID":"31e61951-a00b-4150-be6e-2705059eeeee","Type":"ContainerDied","Data":"ff3472b66ba22b7bdbf692e449e06dce6dc6a5ef4ffabbbba3cf14c1a65784dd"} Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.606932 4718 scope.go:117] "RemoveContainer" containerID="ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.607118 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxwk7" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.629373 4718 scope.go:117] "RemoveContainer" containerID="236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.652690 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxwk7"] Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.665107 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kxwk7"] Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.672952 4718 scope.go:117] "RemoveContainer" containerID="44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.709264 4718 scope.go:117] "RemoveContainer" containerID="ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8" Dec 10 16:15:58 crc kubenswrapper[4718]: E1210 16:15:58.709894 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8\": container with ID starting with ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8 not found: ID does not exist" containerID="ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 
16:15:58.709988 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8"} err="failed to get container status \"ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8\": rpc error: code = NotFound desc = could not find container \"ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8\": container with ID starting with ecc99f9e70b9b1db2807995c384ffc0069c02a2292f618d3a732fd604e5053c8 not found: ID does not exist" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.710035 4718 scope.go:117] "RemoveContainer" containerID="236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00" Dec 10 16:15:58 crc kubenswrapper[4718]: E1210 16:15:58.710636 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00\": container with ID starting with 236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00 not found: ID does not exist" containerID="236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.710681 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00"} err="failed to get container status \"236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00\": rpc error: code = NotFound desc = could not find container \"236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00\": container with ID starting with 236119734a2fd1d6f196cbb847536ba0d87c93420d6e9b2f21f57c5755984c00 not found: ID does not exist" Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.710707 4718 scope.go:117] "RemoveContainer" containerID="44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e" Dec 10 16:15:58 crc 
kubenswrapper[4718]: E1210 16:15:58.710998 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e\": container with ID starting with 44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e not found: ID does not exist" containerID="44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e"
Dec 10 16:15:58 crc kubenswrapper[4718]: I1210 16:15:58.711026 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e"} err="failed to get container status \"44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e\": rpc error: code = NotFound desc = could not find container \"44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e\": container with ID starting with 44dd1d4ebe10499fa9cf30ceac1a72082bbb1541177b61e28b8d03cfc339814e not found: ID does not exist"
Dec 10 16:16:00 crc kubenswrapper[4718]: I1210 16:16:00.036342 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e61951-a00b-4150-be6e-2705059eeeee" path="/var/lib/kubelet/pods/31e61951-a00b-4150-be6e-2705059eeeee/volumes"
Dec 10 16:16:05 crc kubenswrapper[4718]: I1210 16:16:05.775563 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bvd4w_fe1f76c3-fb22-4c92-bc66-7048e04e63b0/control-plane-machine-set-operator/0.log"
Dec 10 16:16:05 crc kubenswrapper[4718]: I1210 16:16:05.965380 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prs2h_8b5042f5-52a1-42da-9a21-72d7b2a75c75/machine-api-operator/0.log"
Dec 10 16:16:05 crc kubenswrapper[4718]: I1210 16:16:05.985788 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prs2h_8b5042f5-52a1-42da-9a21-72d7b2a75c75/kube-rbac-proxy/0.log"
Dec 10 16:16:18 crc kubenswrapper[4718]: I1210 16:16:18.655939 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-nmkz4_8989c984-6f88-4b26-9c39-37cd583802d7/cert-manager-controller/0.log"
Dec 10 16:16:18 crc kubenswrapper[4718]: I1210 16:16:18.750080 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-kjxkw_33afc510-28a4-4f71-810d-da9f04ca2a86/cert-manager-cainjector/0.log"
Dec 10 16:16:18 crc kubenswrapper[4718]: I1210 16:16:18.824632 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-t9tzp_07467c5a-e532-4233-8736-8191dbbdd234/cert-manager-webhook/0.log"
Dec 10 16:16:31 crc kubenswrapper[4718]: I1210 16:16:31.218378 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-p79rx_15f18f34-32e0-49a4-b05d-ccd88e6c9541/nmstate-console-plugin/0.log"
Dec 10 16:16:31 crc kubenswrapper[4718]: I1210 16:16:31.393253 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2pm2t_1d31a2e1-7843-4881-807c-38aed6f2ee1d/nmstate-handler/0.log"
Dec 10 16:16:31 crc kubenswrapper[4718]: I1210 16:16:31.412440 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-l52p4_dc2fd026-789f-445a-befd-cdaf23a77c25/kube-rbac-proxy/0.log"
Dec 10 16:16:31 crc kubenswrapper[4718]: I1210 16:16:31.510931 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-l52p4_dc2fd026-789f-445a-befd-cdaf23a77c25/nmstate-metrics/0.log"
Dec 10 16:16:31 crc kubenswrapper[4718]: I1210 16:16:31.623153 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-99xzv_b28ae747-6984-4dea-8efd-f3f238f56386/nmstate-operator/0.log"
Dec 10 16:16:31 crc kubenswrapper[4718]: I1210 16:16:31.756853 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-xkpb5_64da34db-cd1c-46a7-9a41-69926590d466/nmstate-webhook/0.log"
Dec 10 16:16:47 crc kubenswrapper[4718]: I1210 16:16:47.597981 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wm6nx_e08743b1-961a-43f6-a4b4-c546f2ce87cf/kube-rbac-proxy/0.log"
Dec 10 16:16:47 crc kubenswrapper[4718]: I1210 16:16:47.828795 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log"
Dec 10 16:16:47 crc kubenswrapper[4718]: I1210 16:16:47.837128 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wm6nx_e08743b1-961a-43f6-a4b4-c546f2ce87cf/controller/0.log"
Dec 10 16:16:47 crc kubenswrapper[4718]: I1210 16:16:47.994702 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.030141 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.076785 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.093085 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.259490 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.278169 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.282250 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.352868 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.519211 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.533795 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.547776 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.605769 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/controller/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.806380 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/kube-rbac-proxy/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.821247 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/frr-metrics/0.log"
Dec 10 16:16:48 crc kubenswrapper[4718]: I1210 16:16:48.860516 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/kube-rbac-proxy-frr/0.log"
Dec 10 16:16:49 crc kubenswrapper[4718]: I1210 16:16:49.022204 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/reloader/0.log"
Dec 10 16:16:49 crc kubenswrapper[4718]: I1210 16:16:49.164927 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-45v7s_63f53c92-7e30-4be9-be8e-4eb3126d9fc1/frr-k8s-webhook-server/0.log"
Dec 10 16:16:49 crc kubenswrapper[4718]: I1210 16:16:49.421245 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5d7fdbff7b-h2jjz_2c08926d-27dd-4571-a545-aeb91d97a810/manager/0.log"
Dec 10 16:16:49 crc kubenswrapper[4718]: I1210 16:16:49.514992 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6fdd887f57-qm9df_dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e/webhook-server/0.log"
Dec 10 16:16:49 crc kubenswrapper[4718]: I1210 16:16:49.686993 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pbmz7_1a988a3f-3408-4963-8f6c-b77351286aab/kube-rbac-proxy/0.log"
Dec 10 16:16:50 crc kubenswrapper[4718]: I1210 16:16:50.468594 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pbmz7_1a988a3f-3408-4963-8f6c-b77351286aab/speaker/0.log"
Dec 10 16:16:50 crc kubenswrapper[4718]: I1210 16:16:50.726483 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/frr/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.107857 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/util/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.350164 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/pull/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.357107 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/util/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.398858 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/pull/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.574917 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/util/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.585239 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/extract/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.604888 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/pull/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.758990 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/util/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.983021 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/pull/0.log"
Dec 10 16:17:03 crc kubenswrapper[4718]: I1210 16:17:03.992913 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/pull/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.009598 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/util/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.171939 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/util/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.207283 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/pull/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.218231 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/extract/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.499279 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/util/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.599422 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/util/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.619824 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/pull/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.620897 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/pull/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.842008 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/util/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.899164 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/extract/0.log"
Dec 10 16:17:04 crc kubenswrapper[4718]: I1210 16:17:04.905853 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/pull/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.056455 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-utilities/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.271267 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-utilities/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.281629 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-content/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.289122 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-content/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.461332 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-utilities/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.484746 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-content/0.log"
Dec 10 16:17:05 crc kubenswrapper[4718]: I1210 16:17:05.808579 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-utilities/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.016749 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-content/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.098612 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-utilities/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.194300 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-content/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.323441 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/registry-server/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.390704 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-content/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.396018 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-utilities/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.763439 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8m8j_fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90/marketplace-operator/0.log"
Dec 10 16:17:06 crc kubenswrapper[4718]: I1210 16:17:06.908459 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-utilities/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.210762 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-content/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.247935 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-utilities/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.290944 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-content/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.470055 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-utilities/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.567056 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-content/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.599709 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/registry-server/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.774742 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/registry-server/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.801275 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/extract-utilities/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.969019 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/extract-content/0.log"
Dec 10 16:17:07 crc kubenswrapper[4718]: I1210 16:17:07.990509 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/extract-utilities/0.log"
Dec 10 16:17:08 crc kubenswrapper[4718]: I1210 16:17:08.005759 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/extract-content/0.log"
Dec 10 16:17:08 crc kubenswrapper[4718]: I1210 16:17:08.205551 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/extract-content/0.log"
Dec 10 16:17:08 crc kubenswrapper[4718]: I1210 16:17:08.240156 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/extract-utilities/0.log"
Dec 10 16:17:08 crc kubenswrapper[4718]: I1210 16:17:08.601820 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k9h7w_4525909a-e5eb-458e-9a90-b5a079e0eb09/registry-server/0.log"
Dec 10 16:17:18 crc kubenswrapper[4718]: I1210 16:17:18.084597 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 16:17:18 crc kubenswrapper[4718]: I1210 16:17:18.085275 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 16:17:21 crc kubenswrapper[4718]: I1210 16:17:21.793792 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-czssc_8dd390a0-a978-4de3-ad2e-b76c9f9288ff/prometheus-operator/0.log"
Dec 10 16:17:21 crc kubenswrapper[4718]: I1210 16:17:21.995215 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c6579888-795fs_d7da36d4-9aa5-4d1c-8135-f4af9a21dde9/prometheus-operator-admission-webhook/0.log"
Dec 10 16:17:22 crc kubenswrapper[4718]: I1210 16:17:22.061781 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c6579888-mxh5h_dc2f2282-251a-4b32-b59d-8e28aa8e28b1/prometheus-operator-admission-webhook/0.log"
Dec 10 16:17:22 crc kubenswrapper[4718]: I1210 16:17:22.296336 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-k2ncb_ecb014f7-37b1-431b-a452-676f723287f4/operator/0.log"
Dec 10 16:17:22 crc kubenswrapper[4718]: I1210 16:17:22.362445 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-fxb27_5ae1b9f9-6939-4aa0-8651-a76dafd291a4/perses-operator/0.log"
Dec 10 16:17:48 crc kubenswrapper[4718]: I1210 16:17:48.084330 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 16:17:48 crc kubenswrapper[4718]: I1210 16:17:48.084968 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.084498 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.085746 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.085882 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn"
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.086854 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.086996 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" gracePeriod=600
Dec 10 16:18:18 crc kubenswrapper[4718]: E1210 16:18:18.286352 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.293986 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" exitCode=0
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.294036 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"}
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.294079 4718 scope.go:117] "RemoveContainer" containerID="0c11c531f13dd9c964e72f22f6c356f673d535df358bec5460370fca430952f1"
Dec 10 16:18:18 crc kubenswrapper[4718]: I1210 16:18:18.294798 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:18:18 crc kubenswrapper[4718]: E1210 16:18:18.295113 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:18:33 crc kubenswrapper[4718]: I1210 16:18:33.541939 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:18:33 crc kubenswrapper[4718]: E1210 16:18:33.542922 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:18:46 crc kubenswrapper[4718]: I1210 16:18:46.027570 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:18:46 crc kubenswrapper[4718]: E1210 16:18:46.028506 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:18:57 crc kubenswrapper[4718]: I1210 16:18:57.022019 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:18:57 crc kubenswrapper[4718]: E1210 16:18:57.024482 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:19:08 crc kubenswrapper[4718]: I1210 16:19:08.021435 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:19:08 crc kubenswrapper[4718]: E1210 16:19:08.022673 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:19:13 crc kubenswrapper[4718]: I1210 16:19:13.887838 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6c7874df4c-ns7dm" podUID="52552bcb-7acc-4882-86ef-0353a39e7262" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Dec 10 16:19:23 crc kubenswrapper[4718]: I1210 16:19:23.020149 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:19:23 crc kubenswrapper[4718]: E1210 16:19:23.021096 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:19:34 crc kubenswrapper[4718]: I1210 16:19:34.020218 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:19:34 crc kubenswrapper[4718]: E1210 16:19:34.021372 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:19:46 crc kubenswrapper[4718]: I1210 16:19:46.195006 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:19:46 crc kubenswrapper[4718]: E1210 16:19:46.196133 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:19:50 crc kubenswrapper[4718]: I1210 16:19:50.297517 4718 generic.go:334] "Generic (PLEG): container finished" podID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerID="d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b" exitCode=0
Dec 10 16:19:50 crc kubenswrapper[4718]: I1210 16:19:50.297573 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xvrn2/must-gather-7sj76" event={"ID":"f8400f40-f090-420a-9c71-5bef1a2bce1f","Type":"ContainerDied","Data":"d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b"}
Dec 10 16:19:50 crc kubenswrapper[4718]: I1210 16:19:50.299702 4718 scope.go:117] "RemoveContainer" containerID="d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b"
Dec 10 16:19:50 crc kubenswrapper[4718]: I1210 16:19:50.806894 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xvrn2_must-gather-7sj76_f8400f40-f090-420a-9c71-5bef1a2bce1f/gather/0.log"
Dec 10 16:19:58 crc kubenswrapper[4718]: I1210 16:19:58.020900 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:19:58 crc kubenswrapper[4718]: E1210 16:19:58.021721 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7"
Dec 10 16:19:59 crc kubenswrapper[4718]: I1210 16:19:59.828995 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xvrn2/must-gather-7sj76"]
Dec 10 16:19:59 crc kubenswrapper[4718]: I1210 16:19:59.829787 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xvrn2/must-gather-7sj76" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="copy" containerID="cri-o://25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a" gracePeriod=2
Dec 10 16:19:59 crc kubenswrapper[4718]: I1210 16:19:59.842604 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xvrn2/must-gather-7sj76"]
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.286456 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xvrn2_must-gather-7sj76_f8400f40-f090-420a-9c71-5bef1a2bce1f/copy/0.log"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.287253 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/must-gather-7sj76"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.463162 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9tgj\" (UniqueName: \"kubernetes.io/projected/f8400f40-f090-420a-9c71-5bef1a2bce1f-kube-api-access-j9tgj\") pod \"f8400f40-f090-420a-9c71-5bef1a2bce1f\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") "
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.463600 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8400f40-f090-420a-9c71-5bef1a2bce1f-must-gather-output\") pod \"f8400f40-f090-420a-9c71-5bef1a2bce1f\" (UID: \"f8400f40-f090-420a-9c71-5bef1a2bce1f\") "
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.474729 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8400f40-f090-420a-9c71-5bef1a2bce1f-kube-api-access-j9tgj" (OuterVolumeSpecName: "kube-api-access-j9tgj") pod "f8400f40-f090-420a-9c71-5bef1a2bce1f" (UID: "f8400f40-f090-420a-9c71-5bef1a2bce1f"). InnerVolumeSpecName "kube-api-access-j9tgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.547676 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xvrn2_must-gather-7sj76_f8400f40-f090-420a-9c71-5bef1a2bce1f/copy/0.log"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.548167 4718 generic.go:334] "Generic (PLEG): container finished" podID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerID="25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a" exitCode=143
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.548258 4718 scope.go:117] "RemoveContainer" containerID="25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.548289 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xvrn2/must-gather-7sj76"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.566238 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9tgj\" (UniqueName: \"kubernetes.io/projected/f8400f40-f090-420a-9c71-5bef1a2bce1f-kube-api-access-j9tgj\") on node \"crc\" DevicePath \"\""
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.586836 4718 scope.go:117] "RemoveContainer" containerID="d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.713808 4718 scope.go:117] "RemoveContainer" containerID="25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a"
Dec 10 16:20:00 crc kubenswrapper[4718]: E1210 16:20:00.714639 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a\": container with ID starting with 25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a not found: ID does not exist" containerID="25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.714700 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a"} err="failed to get container status \"25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a\": rpc error: code = NotFound desc = could not find container \"25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a\": container with ID starting with 25dd7b76ab8d70a94eac648b0e0be013e7836a2e7d298803ddf67997248ee57a not found: ID does not exist"
Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.714732 4718 scope.go:117] "RemoveContainer" containerID="d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b"
Dec 10 16:20:00 crc kubenswrapper[4718]: 
E1210 16:20:00.715113 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b\": container with ID starting with d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b not found: ID does not exist" containerID="d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b" Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.715171 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b"} err="failed to get container status \"d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b\": rpc error: code = NotFound desc = could not find container \"d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b\": container with ID starting with d058b5da14c3f5abe00b3a50316fce676c9cc30e96bacc30a23060b168ad804b not found: ID does not exist" Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.730004 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8400f40-f090-420a-9c71-5bef1a2bce1f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f8400f40-f090-420a-9c71-5bef1a2bce1f" (UID: "f8400f40-f090-420a-9c71-5bef1a2bce1f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:20:00 crc kubenswrapper[4718]: I1210 16:20:00.770177 4718 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8400f40-f090-420a-9c71-5bef1a2bce1f-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 10 16:20:02 crc kubenswrapper[4718]: I1210 16:20:02.035931 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" path="/var/lib/kubelet/pods/f8400f40-f090-420a-9c71-5bef1a2bce1f/volumes" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.831828 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kssn4"] Dec 10 16:20:05 crc kubenswrapper[4718]: E1210 16:20:05.833073 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="gather" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833103 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="gather" Dec 10 16:20:05 crc kubenswrapper[4718]: E1210 16:20:05.833127 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="extract-content" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833135 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="extract-content" Dec 10 16:20:05 crc kubenswrapper[4718]: E1210 16:20:05.833164 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="extract-utilities" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833169 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="extract-utilities" Dec 10 16:20:05 crc kubenswrapper[4718]: E1210 16:20:05.833199 4718 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="registry-server" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833205 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="registry-server" Dec 10 16:20:05 crc kubenswrapper[4718]: E1210 16:20:05.833228 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="copy" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833233 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="copy" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833514 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="copy" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833542 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e61951-a00b-4150-be6e-2705059eeeee" containerName="registry-server" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.833564 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8400f40-f090-420a-9c71-5bef1a2bce1f" containerName="gather" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.836010 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.847715 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kssn4"] Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.910041 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-catalog-content\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.910136 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vpc\" (UniqueName: \"kubernetes.io/projected/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-kube-api-access-l8vpc\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:05 crc kubenswrapper[4718]: I1210 16:20:05.910276 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-utilities\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.012688 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-catalog-content\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.012796 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l8vpc\" (UniqueName: \"kubernetes.io/projected/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-kube-api-access-l8vpc\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.012900 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-utilities\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.013405 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-catalog-content\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.013500 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-utilities\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.035551 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vpc\" (UniqueName: \"kubernetes.io/projected/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-kube-api-access-l8vpc\") pod \"certified-operators-kssn4\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.169793 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.760130 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kssn4"] Dec 10 16:20:06 crc kubenswrapper[4718]: I1210 16:20:06.830548 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerStarted","Data":"4359fc949660cc2eab5138a77d71332ed03bbf4816679432866043c35a53461c"} Dec 10 16:20:07 crc kubenswrapper[4718]: I1210 16:20:07.845568 4718 generic.go:334] "Generic (PLEG): container finished" podID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerID="c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e" exitCode=0 Dec 10 16:20:07 crc kubenswrapper[4718]: I1210 16:20:07.845632 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerDied","Data":"c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e"} Dec 10 16:20:09 crc kubenswrapper[4718]: I1210 16:20:09.020176 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:20:09 crc kubenswrapper[4718]: E1210 16:20:09.020777 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:20:09 crc kubenswrapper[4718]: I1210 16:20:09.867031 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" 
event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerStarted","Data":"1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa"} Dec 10 16:20:10 crc kubenswrapper[4718]: I1210 16:20:10.879953 4718 generic.go:334] "Generic (PLEG): container finished" podID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerID="1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa" exitCode=0 Dec 10 16:20:10 crc kubenswrapper[4718]: I1210 16:20:10.880057 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerDied","Data":"1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa"} Dec 10 16:20:11 crc kubenswrapper[4718]: I1210 16:20:11.892763 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerStarted","Data":"25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf"} Dec 10 16:20:11 crc kubenswrapper[4718]: I1210 16:20:11.920254 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kssn4" podStartSLOduration=3.373665989 podStartE2EDuration="6.920166545s" podCreationTimestamp="2025-12-10 16:20:05 +0000 UTC" firstStartedPulling="2025-12-10 16:20:07.848421574 +0000 UTC m=+6512.797645001" lastFinishedPulling="2025-12-10 16:20:11.39492214 +0000 UTC m=+6516.344145557" observedRunningTime="2025-12-10 16:20:11.91056984 +0000 UTC m=+6516.859793257" watchObservedRunningTime="2025-12-10 16:20:11.920166545 +0000 UTC m=+6516.869389962" Dec 10 16:20:16 crc kubenswrapper[4718]: I1210 16:20:16.170978 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:16 crc kubenswrapper[4718]: I1210 16:20:16.171757 4718 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:16 crc kubenswrapper[4718]: I1210 16:20:16.243024 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:16 crc kubenswrapper[4718]: I1210 16:20:16.997034 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:17 crc kubenswrapper[4718]: I1210 16:20:17.048047 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kssn4"] Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.018657 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kssn4" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="registry-server" containerID="cri-o://25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf" gracePeriod=2 Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.525001 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.639790 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8vpc\" (UniqueName: \"kubernetes.io/projected/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-kube-api-access-l8vpc\") pod \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.639925 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-catalog-content\") pod \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.639951 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-utilities\") pod \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\" (UID: \"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad\") " Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.641216 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-utilities" (OuterVolumeSpecName: "utilities") pod "77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" (UID: "77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.656673 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-kube-api-access-l8vpc" (OuterVolumeSpecName: "kube-api-access-l8vpc") pod "77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" (UID: "77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad"). InnerVolumeSpecName "kube-api-access-l8vpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.717512 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" (UID: "77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.742259 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8vpc\" (UniqueName: \"kubernetes.io/projected/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-kube-api-access-l8vpc\") on node \"crc\" DevicePath \"\"" Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.742332 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:20:19 crc kubenswrapper[4718]: I1210 16:20:19.742363 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.020718 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:20:20 crc kubenswrapper[4718]: E1210 16:20:20.021021 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:20:20 
crc kubenswrapper[4718]: I1210 16:20:20.031682 4718 generic.go:334] "Generic (PLEG): container finished" podID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerID="25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf" exitCode=0 Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.031771 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kssn4" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.032565 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerDied","Data":"25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf"} Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.032602 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kssn4" event={"ID":"77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad","Type":"ContainerDied","Data":"4359fc949660cc2eab5138a77d71332ed03bbf4816679432866043c35a53461c"} Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.032628 4718 scope.go:117] "RemoveContainer" containerID="25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.055630 4718 scope.go:117] "RemoveContainer" containerID="1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.074662 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kssn4"] Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.085420 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kssn4"] Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.102494 4718 scope.go:117] "RemoveContainer" containerID="c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e" Dec 10 16:20:20 crc 
kubenswrapper[4718]: I1210 16:20:20.137104 4718 scope.go:117] "RemoveContainer" containerID="25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf" Dec 10 16:20:20 crc kubenswrapper[4718]: E1210 16:20:20.137766 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf\": container with ID starting with 25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf not found: ID does not exist" containerID="25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.137919 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf"} err="failed to get container status \"25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf\": rpc error: code = NotFound desc = could not find container \"25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf\": container with ID starting with 25b1f029d9bbf4f907b22088746187bc1a46d9e8f5d712149264704bb777fcbf not found: ID does not exist" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.138033 4718 scope.go:117] "RemoveContainer" containerID="1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa" Dec 10 16:20:20 crc kubenswrapper[4718]: E1210 16:20:20.138684 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa\": container with ID starting with 1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa not found: ID does not exist" containerID="1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.138733 4718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa"} err="failed to get container status \"1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa\": rpc error: code = NotFound desc = could not find container \"1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa\": container with ID starting with 1a018d7ca8921c4271c3e7442da6784db8a0249e38400287d7d4df194bfb47fa not found: ID does not exist" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.138770 4718 scope.go:117] "RemoveContainer" containerID="c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e" Dec 10 16:20:20 crc kubenswrapper[4718]: E1210 16:20:20.139323 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e\": container with ID starting with c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e not found: ID does not exist" containerID="c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e" Dec 10 16:20:20 crc kubenswrapper[4718]: I1210 16:20:20.139469 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e"} err="failed to get container status \"c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e\": rpc error: code = NotFound desc = could not find container \"c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e\": container with ID starting with c53135c00e4ebbe5667f3c265e987d16a0276e0f03a86936992cf1653840037e not found: ID does not exist" Dec 10 16:20:22 crc kubenswrapper[4718]: I1210 16:20:22.034982 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" path="/var/lib/kubelet/pods/77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad/volumes" Dec 10 16:20:34 crc kubenswrapper[4718]: I1210 
16:20:34.020278 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:20:34 crc kubenswrapper[4718]: E1210 16:20:34.021118 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:20:45 crc kubenswrapper[4718]: I1210 16:20:45.021047 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:20:45 crc kubenswrapper[4718]: E1210 16:20:45.021887 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:20:57 crc kubenswrapper[4718]: I1210 16:20:57.887867 4718 scope.go:117] "RemoveContainer" containerID="fcea4d5b070fdbae0c506d61a7338592ca19e1cc8583ba0117c1336240a5726b" Dec 10 16:20:58 crc kubenswrapper[4718]: I1210 16:20:58.021208 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:20:58 crc kubenswrapper[4718]: E1210 16:20:58.021739 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:21:11 crc kubenswrapper[4718]: I1210 16:21:11.020516 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:21:11 crc kubenswrapper[4718]: E1210 16:21:11.021727 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:21:22 crc kubenswrapper[4718]: I1210 16:21:22.020629 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:21:22 crc kubenswrapper[4718]: E1210 16:21:22.021482 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:21:35 crc kubenswrapper[4718]: I1210 16:21:35.020976 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:21:35 crc kubenswrapper[4718]: E1210 16:21:35.021961 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.216063 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmn9z"] Dec 10 16:21:38 crc kubenswrapper[4718]: E1210 16:21:38.217310 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="extract-content" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.217332 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="extract-content" Dec 10 16:21:38 crc kubenswrapper[4718]: E1210 16:21:38.217349 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="registry-server" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.217359 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="registry-server" Dec 10 16:21:38 crc kubenswrapper[4718]: E1210 16:21:38.217431 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="extract-utilities" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.217443 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="extract-utilities" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.217765 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f9cb8f-fb1d-42f5-94f9-fa27a348b9ad" containerName="registry-server" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.219786 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.227931 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmn9z"] Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.350926 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq6lr\" (UniqueName: \"kubernetes.io/projected/28438923-c961-460d-8a5a-6a2d5e172fd3-kube-api-access-dq6lr\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.351072 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-utilities\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.351418 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-catalog-content\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.454208 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq6lr\" (UniqueName: \"kubernetes.io/projected/28438923-c961-460d-8a5a-6a2d5e172fd3-kube-api-access-dq6lr\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.454380 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-utilities\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.454476 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-catalog-content\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.455214 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-catalog-content\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.455252 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-utilities\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.477727 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq6lr\" (UniqueName: \"kubernetes.io/projected/28438923-c961-460d-8a5a-6a2d5e172fd3-kube-api-access-dq6lr\") pod \"redhat-marketplace-vmn9z\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:38 crc kubenswrapper[4718]: I1210 16:21:38.556148 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:39 crc kubenswrapper[4718]: I1210 16:21:39.033206 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmn9z"] Dec 10 16:21:39 crc kubenswrapper[4718]: I1210 16:21:39.933591 4718 generic.go:334] "Generic (PLEG): container finished" podID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerID="1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4" exitCode=0 Dec 10 16:21:39 crc kubenswrapper[4718]: I1210 16:21:39.933698 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmn9z" event={"ID":"28438923-c961-460d-8a5a-6a2d5e172fd3","Type":"ContainerDied","Data":"1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4"} Dec 10 16:21:39 crc kubenswrapper[4718]: I1210 16:21:39.933924 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmn9z" event={"ID":"28438923-c961-460d-8a5a-6a2d5e172fd3","Type":"ContainerStarted","Data":"f03f83e60ce2bcba3d5fe22977a312a2e3ada922484b7db6e974d016eb2f14e5"} Dec 10 16:21:39 crc kubenswrapper[4718]: I1210 16:21:39.935978 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:21:41 crc kubenswrapper[4718]: I1210 16:21:41.970230 4718 generic.go:334] "Generic (PLEG): container finished" podID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerID="fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597" exitCode=0 Dec 10 16:21:41 crc kubenswrapper[4718]: I1210 16:21:41.970498 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmn9z" event={"ID":"28438923-c961-460d-8a5a-6a2d5e172fd3","Type":"ContainerDied","Data":"fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597"} Dec 10 16:21:42 crc kubenswrapper[4718]: I1210 16:21:42.983107 4718 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-vmn9z" event={"ID":"28438923-c961-460d-8a5a-6a2d5e172fd3","Type":"ContainerStarted","Data":"87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a"} Dec 10 16:21:43 crc kubenswrapper[4718]: I1210 16:21:43.015227 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmn9z" podStartSLOduration=2.469511859 podStartE2EDuration="5.015205558s" podCreationTimestamp="2025-12-10 16:21:38 +0000 UTC" firstStartedPulling="2025-12-10 16:21:39.935646406 +0000 UTC m=+6604.884869833" lastFinishedPulling="2025-12-10 16:21:42.481340115 +0000 UTC m=+6607.430563532" observedRunningTime="2025-12-10 16:21:43.007729088 +0000 UTC m=+6607.956952505" watchObservedRunningTime="2025-12-10 16:21:43.015205558 +0000 UTC m=+6607.964428975" Dec 10 16:21:47 crc kubenswrapper[4718]: I1210 16:21:47.021203 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:21:47 crc kubenswrapper[4718]: E1210 16:21:47.021875 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:21:48 crc kubenswrapper[4718]: I1210 16:21:48.556677 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:48 crc kubenswrapper[4718]: I1210 16:21:48.557084 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:48 crc kubenswrapper[4718]: I1210 16:21:48.615364 4718 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:49 crc kubenswrapper[4718]: I1210 16:21:49.111591 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:49 crc kubenswrapper[4718]: I1210 16:21:49.179755 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmn9z"] Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.075288 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmn9z" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="registry-server" containerID="cri-o://87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a" gracePeriod=2 Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.786473 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.825729 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-catalog-content\") pod \"28438923-c961-460d-8a5a-6a2d5e172fd3\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.825929 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq6lr\" (UniqueName: \"kubernetes.io/projected/28438923-c961-460d-8a5a-6a2d5e172fd3-kube-api-access-dq6lr\") pod \"28438923-c961-460d-8a5a-6a2d5e172fd3\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.826009 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-utilities\") 
pod \"28438923-c961-460d-8a5a-6a2d5e172fd3\" (UID: \"28438923-c961-460d-8a5a-6a2d5e172fd3\") " Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.828112 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-utilities" (OuterVolumeSpecName: "utilities") pod "28438923-c961-460d-8a5a-6a2d5e172fd3" (UID: "28438923-c961-460d-8a5a-6a2d5e172fd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.835927 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28438923-c961-460d-8a5a-6a2d5e172fd3-kube-api-access-dq6lr" (OuterVolumeSpecName: "kube-api-access-dq6lr") pod "28438923-c961-460d-8a5a-6a2d5e172fd3" (UID: "28438923-c961-460d-8a5a-6a2d5e172fd3"). InnerVolumeSpecName "kube-api-access-dq6lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.861507 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28438923-c961-460d-8a5a-6a2d5e172fd3" (UID: "28438923-c961-460d-8a5a-6a2d5e172fd3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.929650 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.929902 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq6lr\" (UniqueName: \"kubernetes.io/projected/28438923-c961-460d-8a5a-6a2d5e172fd3-kube-api-access-dq6lr\") on node \"crc\" DevicePath \"\"" Dec 10 16:21:51 crc kubenswrapper[4718]: I1210 16:21:51.930019 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28438923-c961-460d-8a5a-6a2d5e172fd3-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.101765 4718 generic.go:334] "Generic (PLEG): container finished" podID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerID="87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a" exitCode=0 Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.101819 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmn9z" event={"ID":"28438923-c961-460d-8a5a-6a2d5e172fd3","Type":"ContainerDied","Data":"87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a"} Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.101853 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmn9z" event={"ID":"28438923-c961-460d-8a5a-6a2d5e172fd3","Type":"ContainerDied","Data":"f03f83e60ce2bcba3d5fe22977a312a2e3ada922484b7db6e974d016eb2f14e5"} Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.101876 4718 scope.go:117] "RemoveContainer" containerID="87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 
16:21:52.102030 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmn9z" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.141020 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmn9z"] Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.149082 4718 scope.go:117] "RemoveContainer" containerID="fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.154918 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmn9z"] Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.174241 4718 scope.go:117] "RemoveContainer" containerID="1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.223646 4718 scope.go:117] "RemoveContainer" containerID="87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a" Dec 10 16:21:52 crc kubenswrapper[4718]: E1210 16:21:52.224574 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a\": container with ID starting with 87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a not found: ID does not exist" containerID="87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.224749 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a"} err="failed to get container status \"87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a\": rpc error: code = NotFound desc = could not find container \"87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a\": container with ID starting with 
87f9d023ae2759e75e0bd429b84a6ab04661cb020d9f06b7d1c22a16895a125a not found: ID does not exist" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.224781 4718 scope.go:117] "RemoveContainer" containerID="fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597" Dec 10 16:21:52 crc kubenswrapper[4718]: E1210 16:21:52.225133 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597\": container with ID starting with fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597 not found: ID does not exist" containerID="fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.225163 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597"} err="failed to get container status \"fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597\": rpc error: code = NotFound desc = could not find container \"fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597\": container with ID starting with fd1ac219bb963c5582db948308500bd6fb745a85fd176f4686cefe4fd2285597 not found: ID does not exist" Dec 10 16:21:52 crc kubenswrapper[4718]: I1210 16:21:52.225182 4718 scope.go:117] "RemoveContainer" containerID="1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4" Dec 10 16:21:52 crc kubenswrapper[4718]: E1210 16:21:52.225981 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4\": container with ID starting with 1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4 not found: ID does not exist" containerID="1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4" Dec 10 16:21:52 crc 
kubenswrapper[4718]: I1210 16:21:52.226016 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4"} err="failed to get container status \"1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4\": rpc error: code = NotFound desc = could not find container \"1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4\": container with ID starting with 1c1e2c1f1b50cbe3f126fa2967ab6bde40a444c3b2e3439d7384ae2e311f24a4 not found: ID does not exist" Dec 10 16:21:54 crc kubenswrapper[4718]: I1210 16:21:54.034099 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" path="/var/lib/kubelet/pods/28438923-c961-460d-8a5a-6a2d5e172fd3/volumes" Dec 10 16:22:00 crc kubenswrapper[4718]: I1210 16:22:00.020573 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:22:00 crc kubenswrapper[4718]: E1210 16:22:00.021426 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:22:14 crc kubenswrapper[4718]: I1210 16:22:14.022595 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:22:14 crc kubenswrapper[4718]: E1210 16:22:14.026086 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:22:28 crc kubenswrapper[4718]: I1210 16:22:28.021066 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:22:28 crc kubenswrapper[4718]: E1210 16:22:28.022062 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:22:39 crc kubenswrapper[4718]: I1210 16:22:39.035276 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:22:39 crc kubenswrapper[4718]: E1210 16:22:39.036479 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:22:52 crc kubenswrapper[4718]: I1210 16:22:52.020317 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:22:52 crc kubenswrapper[4718]: E1210 16:22:52.021160 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:23:05 crc kubenswrapper[4718]: I1210 16:23:05.020464 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:23:05 crc kubenswrapper[4718]: E1210 16:23:05.021176 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.845915 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wbxhg"] Dec 10 16:23:16 crc kubenswrapper[4718]: E1210 16:23:16.847341 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="extract-content" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.847365 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="extract-content" Dec 10 16:23:16 crc kubenswrapper[4718]: E1210 16:23:16.847459 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="registry-server" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.847469 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="registry-server" Dec 10 16:23:16 crc kubenswrapper[4718]: E1210 16:23:16.847488 4718 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="extract-utilities" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.847496 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="extract-utilities" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.847772 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="28438923-c961-460d-8a5a-6a2d5e172fd3" containerName="registry-server" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.849944 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:16 crc kubenswrapper[4718]: I1210 16:23:16.862237 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wbxhg"] Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.002681 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/823e77a1-f4fa-452f-bfc5-572228021708-catalog-content\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.002999 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/823e77a1-f4fa-452f-bfc5-572228021708-utilities\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.003324 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79p8\" (UniqueName: \"kubernetes.io/projected/823e77a1-f4fa-452f-bfc5-572228021708-kube-api-access-p79p8\") pod \"redhat-operators-wbxhg\" (UID: 
\"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.105372 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/823e77a1-f4fa-452f-bfc5-572228021708-utilities\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.105495 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79p8\" (UniqueName: \"kubernetes.io/projected/823e77a1-f4fa-452f-bfc5-572228021708-kube-api-access-p79p8\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.105580 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/823e77a1-f4fa-452f-bfc5-572228021708-catalog-content\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.106068 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/823e77a1-f4fa-452f-bfc5-572228021708-catalog-content\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.106186 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/823e77a1-f4fa-452f-bfc5-572228021708-utilities\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " 
pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.126961 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79p8\" (UniqueName: \"kubernetes.io/projected/823e77a1-f4fa-452f-bfc5-572228021708-kube-api-access-p79p8\") pod \"redhat-operators-wbxhg\" (UID: \"823e77a1-f4fa-452f-bfc5-572228021708\") " pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.177946 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wbxhg" Dec 10 16:23:17 crc kubenswrapper[4718]: I1210 16:23:17.668059 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wbxhg"] Dec 10 16:23:18 crc kubenswrapper[4718]: I1210 16:23:18.022038 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:23:18 crc kubenswrapper[4718]: E1210 16:23:18.022277 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:23:18 crc kubenswrapper[4718]: I1210 16:23:18.206048 4718 generic.go:334] "Generic (PLEG): container finished" podID="823e77a1-f4fa-452f-bfc5-572228021708" containerID="b8544a4a5cc61d319bca3d6ceb406920f4125c2ebbef0b599ad47e433e85bc10" exitCode=0 Dec 10 16:23:18 crc kubenswrapper[4718]: I1210 16:23:18.206107 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbxhg" 
event={"ID":"823e77a1-f4fa-452f-bfc5-572228021708","Type":"ContainerDied","Data":"b8544a4a5cc61d319bca3d6ceb406920f4125c2ebbef0b599ad47e433e85bc10"}
Dec 10 16:23:18 crc kubenswrapper[4718]: I1210 16:23:18.206143 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbxhg" event={"ID":"823e77a1-f4fa-452f-bfc5-572228021708","Type":"ContainerStarted","Data":"7623538619d2ac6ec65b2e5fd2c912d9b90863d08463f7148f783d42eb413f8e"}
Dec 10 16:23:27 crc kubenswrapper[4718]: I1210 16:23:27.307936 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbxhg" event={"ID":"823e77a1-f4fa-452f-bfc5-572228021708","Type":"ContainerStarted","Data":"71af5e13f3a21cdade07e350d74385fa4830561c2c12318f15322f5e55d541a1"}
Dec 10 16:23:31 crc kubenswrapper[4718]: I1210 16:23:31.374571 4718 generic.go:334] "Generic (PLEG): container finished" podID="823e77a1-f4fa-452f-bfc5-572228021708" containerID="71af5e13f3a21cdade07e350d74385fa4830561c2c12318f15322f5e55d541a1" exitCode=0
Dec 10 16:23:31 crc kubenswrapper[4718]: I1210 16:23:31.374720 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbxhg" event={"ID":"823e77a1-f4fa-452f-bfc5-572228021708","Type":"ContainerDied","Data":"71af5e13f3a21cdade07e350d74385fa4830561c2c12318f15322f5e55d541a1"}
Dec 10 16:23:32 crc kubenswrapper[4718]: I1210 16:23:32.391223 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wbxhg" event={"ID":"823e77a1-f4fa-452f-bfc5-572228021708","Type":"ContainerStarted","Data":"32c1b0cf232a1be04f76ea8c5a2ba31a9fba88cc60503c2d154f26cff2051c81"}
Dec 10 16:23:32 crc kubenswrapper[4718]: I1210 16:23:32.410171 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wbxhg" podStartSLOduration=2.810308541 podStartE2EDuration="16.410147932s" podCreationTimestamp="2025-12-10 16:23:16 +0000 UTC" firstStartedPulling="2025-12-10 16:23:18.208610221 +0000 UTC m=+6703.157833638" lastFinishedPulling="2025-12-10 16:23:31.808449612 +0000 UTC m=+6716.757673029" observedRunningTime="2025-12-10 16:23:32.409427584 +0000 UTC m=+6717.358651011" watchObservedRunningTime="2025-12-10 16:23:32.410147932 +0000 UTC m=+6717.359371349"
Dec 10 16:23:33 crc kubenswrapper[4718]: I1210 16:23:33.020828 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d"
Dec 10 16:23:37 crc kubenswrapper[4718]: I1210 16:23:37.178653 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wbxhg"
Dec 10 16:23:37 crc kubenswrapper[4718]: I1210 16:23:37.179257 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wbxhg"
Dec 10 16:23:38 crc kubenswrapper[4718]: I1210 16:23:38.222583 4718 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wbxhg" podUID="823e77a1-f4fa-452f-bfc5-572228021708" containerName="registry-server" probeResult="failure" output=<
Dec 10 16:23:38 crc kubenswrapper[4718]: timeout: failed to connect service ":50051" within 1s
Dec 10 16:23:38 crc kubenswrapper[4718]: >
Dec 10 16:23:38 crc kubenswrapper[4718]: I1210 16:23:38.452084 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"8ed5a435f21adc700b92bc75b1658785da982c7c96614fd189595cd44c41f5bd"}
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.570986 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dnscw/must-gather-hdqkd"]
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.573512 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.575338 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dnscw"/"default-dockercfg-mjx8v"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.575636 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dnscw"/"openshift-service-ca.crt"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.575790 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dnscw"/"kube-root-ca.crt"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.606209 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dnscw/must-gather-hdqkd"]
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.720520 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1a9ba4-970d-4984-aac8-cca52469a498-must-gather-output\") pod \"must-gather-hdqkd\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.721231 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt58h\" (UniqueName: \"kubernetes.io/projected/4e1a9ba4-970d-4984-aac8-cca52469a498-kube-api-access-nt58h\") pod \"must-gather-hdqkd\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.823352 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1a9ba4-970d-4984-aac8-cca52469a498-must-gather-output\") pod \"must-gather-hdqkd\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.823517 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt58h\" (UniqueName: \"kubernetes.io/projected/4e1a9ba4-970d-4984-aac8-cca52469a498-kube-api-access-nt58h\") pod \"must-gather-hdqkd\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.824090 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1a9ba4-970d-4984-aac8-cca52469a498-must-gather-output\") pod \"must-gather-hdqkd\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.872467 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt58h\" (UniqueName: \"kubernetes.io/projected/4e1a9ba4-970d-4984-aac8-cca52469a498-kube-api-access-nt58h\") pod \"must-gather-hdqkd\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:40 crc kubenswrapper[4718]: I1210 16:23:40.892069 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/must-gather-hdqkd"
Dec 10 16:23:41 crc kubenswrapper[4718]: I1210 16:23:41.305323 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dnscw/must-gather-hdqkd"]
Dec 10 16:23:41 crc kubenswrapper[4718]: I1210 16:23:41.491094 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/must-gather-hdqkd" event={"ID":"4e1a9ba4-970d-4984-aac8-cca52469a498","Type":"ContainerStarted","Data":"27f599c971483cd1992c3361dd7ec87b5a13fd55d929ee61f1abe9f969d01874"}
Dec 10 16:23:42 crc kubenswrapper[4718]: I1210 16:23:42.503212 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/must-gather-hdqkd" event={"ID":"4e1a9ba4-970d-4984-aac8-cca52469a498","Type":"ContainerStarted","Data":"c5df8fde3c3a1c56368f98e6db32663a8a4ac45af347a28e5ddf58657af62273"}
Dec 10 16:23:42 crc kubenswrapper[4718]: I1210 16:23:42.503850 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/must-gather-hdqkd" event={"ID":"4e1a9ba4-970d-4984-aac8-cca52469a498","Type":"ContainerStarted","Data":"7a739ccb7b795e7c13657c4bc82edd1fc5339ba68393ee4b0a1c0e1a0707e6f4"}
Dec 10 16:23:42 crc kubenswrapper[4718]: I1210 16:23:42.534762 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dnscw/must-gather-hdqkd" podStartSLOduration=2.534729038 podStartE2EDuration="2.534729038s" podCreationTimestamp="2025-12-10 16:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:23:42.519735149 +0000 UTC m=+6727.468958566" watchObservedRunningTime="2025-12-10 16:23:42.534729038 +0000 UTC m=+6727.483952465"
Dec 10 16:23:45 crc kubenswrapper[4718]: I1210 16:23:45.997215 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dnscw/crc-debug-dp4vm"]
Dec 10 16:23:45 crc kubenswrapper[4718]: I1210 16:23:45.999445 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.145915 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7gd\" (UniqueName: \"kubernetes.io/projected/56cd3338-234b-47cc-96bd-6627186a0c9b-kube-api-access-mw7gd\") pod \"crc-debug-dp4vm\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") " pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.146332 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56cd3338-234b-47cc-96bd-6627186a0c9b-host\") pod \"crc-debug-dp4vm\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") " pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.249454 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7gd\" (UniqueName: \"kubernetes.io/projected/56cd3338-234b-47cc-96bd-6627186a0c9b-kube-api-access-mw7gd\") pod \"crc-debug-dp4vm\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") " pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.249586 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56cd3338-234b-47cc-96bd-6627186a0c9b-host\") pod \"crc-debug-dp4vm\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") " pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.250028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56cd3338-234b-47cc-96bd-6627186a0c9b-host\") pod \"crc-debug-dp4vm\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") " pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.271166 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7gd\" (UniqueName: \"kubernetes.io/projected/56cd3338-234b-47cc-96bd-6627186a0c9b-kube-api-access-mw7gd\") pod \"crc-debug-dp4vm\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") " pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.327441 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:23:46 crc kubenswrapper[4718]: W1210 16:23:46.358488 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56cd3338_234b_47cc_96bd_6627186a0c9b.slice/crio-c69111012fed681a67015ed9ee5313f947d286b101333b044fb165aed4aaf985 WatchSource:0}: Error finding container c69111012fed681a67015ed9ee5313f947d286b101333b044fb165aed4aaf985: Status 404 returned error can't find the container with id c69111012fed681a67015ed9ee5313f947d286b101333b044fb165aed4aaf985
Dec 10 16:23:46 crc kubenswrapper[4718]: I1210 16:23:46.540037 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-dp4vm" event={"ID":"56cd3338-234b-47cc-96bd-6627186a0c9b","Type":"ContainerStarted","Data":"c69111012fed681a67015ed9ee5313f947d286b101333b044fb165aed4aaf985"}
Dec 10 16:23:47 crc kubenswrapper[4718]: I1210 16:23:47.242528 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wbxhg"
Dec 10 16:23:47 crc kubenswrapper[4718]: I1210 16:23:47.309697 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wbxhg"
Dec 10 16:23:47 crc kubenswrapper[4718]: I1210 16:23:47.552051 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-dp4vm" event={"ID":"56cd3338-234b-47cc-96bd-6627186a0c9b","Type":"ContainerStarted","Data":"77bb5d46b5b86b81cc0861436af27f2c8fe1a44d3ced26ef8a4a7751b50b3ee4"}
Dec 10 16:23:47 crc kubenswrapper[4718]: I1210 16:23:47.576827 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dnscw/crc-debug-dp4vm" podStartSLOduration=2.576801975 podStartE2EDuration="2.576801975s" podCreationTimestamp="2025-12-10 16:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:23:47.568577858 +0000 UTC m=+6732.517801275" watchObservedRunningTime="2025-12-10 16:23:47.576801975 +0000 UTC m=+6732.526025392"
Dec 10 16:23:47 crc kubenswrapper[4718]: I1210 16:23:47.878168 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wbxhg"]
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.047341 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.047664 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9h7w" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="registry-server" containerID="cri-o://f6834da07f5a2929a3ff60d4941b1ca8ca2ac721d4d0983aaa6745aec58b4b88" gracePeriod=2
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.598022 4718 generic.go:334] "Generic (PLEG): container finished" podID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerID="f6834da07f5a2929a3ff60d4941b1ca8ca2ac721d4d0983aaa6745aec58b4b88" exitCode=0
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.600112 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerDied","Data":"f6834da07f5a2929a3ff60d4941b1ca8ca2ac721d4d0983aaa6745aec58b4b88"}
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.711052 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.805571 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-utilities\") pod \"4525909a-e5eb-458e-9a90-b5a079e0eb09\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") "
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.805623 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcn52\" (UniqueName: \"kubernetes.io/projected/4525909a-e5eb-458e-9a90-b5a079e0eb09-kube-api-access-fcn52\") pod \"4525909a-e5eb-458e-9a90-b5a079e0eb09\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") "
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.805722 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-catalog-content\") pod \"4525909a-e5eb-458e-9a90-b5a079e0eb09\" (UID: \"4525909a-e5eb-458e-9a90-b5a079e0eb09\") "
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.806666 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-utilities" (OuterVolumeSpecName: "utilities") pod "4525909a-e5eb-458e-9a90-b5a079e0eb09" (UID: "4525909a-e5eb-458e-9a90-b5a079e0eb09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.827603 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4525909a-e5eb-458e-9a90-b5a079e0eb09-kube-api-access-fcn52" (OuterVolumeSpecName: "kube-api-access-fcn52") pod "4525909a-e5eb-458e-9a90-b5a079e0eb09" (UID: "4525909a-e5eb-458e-9a90-b5a079e0eb09"). InnerVolumeSpecName "kube-api-access-fcn52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.908616 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcn52\" (UniqueName: \"kubernetes.io/projected/4525909a-e5eb-458e-9a90-b5a079e0eb09-kube-api-access-fcn52\") on node \"crc\" DevicePath \"\""
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.908660 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-utilities\") on node \"crc\" DevicePath \"\""
Dec 10 16:23:48 crc kubenswrapper[4718]: I1210 16:23:48.947721 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4525909a-e5eb-458e-9a90-b5a079e0eb09" (UID: "4525909a-e5eb-458e-9a90-b5a079e0eb09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.010840 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4525909a-e5eb-458e-9a90-b5a079e0eb09-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.649204 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9h7w" event={"ID":"4525909a-e5eb-458e-9a90-b5a079e0eb09","Type":"ContainerDied","Data":"27a62d18634007ab19c61aa6b2f430e6bc4f299ca360248358f33115e6e368d5"}
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.649641 4718 scope.go:117] "RemoveContainer" containerID="f6834da07f5a2929a3ff60d4941b1ca8ca2ac721d4d0983aaa6745aec58b4b88"
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.650220 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9h7w"
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.708133 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.732226 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9h7w"]
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.739974 4718 scope.go:117] "RemoveContainer" containerID="177197318a4c820fc6c984485d843b0fbbe47957d01de3022ca348930c51c341"
Dec 10 16:23:49 crc kubenswrapper[4718]: I1210 16:23:49.777546 4718 scope.go:117] "RemoveContainer" containerID="2fe8871bc476d301905625e545dbb1380fb94d90e9764d9582c9f2025c59fd48"
Dec 10 16:23:50 crc kubenswrapper[4718]: I1210 16:23:50.033465 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" path="/var/lib/kubelet/pods/4525909a-e5eb-458e-9a90-b5a079e0eb09/volumes"
Dec 10 16:24:32 crc kubenswrapper[4718]: I1210 16:24:32.198034 4718 generic.go:334] "Generic (PLEG): container finished" podID="56cd3338-234b-47cc-96bd-6627186a0c9b" containerID="77bb5d46b5b86b81cc0861436af27f2c8fe1a44d3ced26ef8a4a7751b50b3ee4" exitCode=0
Dec 10 16:24:32 crc kubenswrapper[4718]: I1210 16:24:32.198258 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-dp4vm" event={"ID":"56cd3338-234b-47cc-96bd-6627186a0c9b","Type":"ContainerDied","Data":"77bb5d46b5b86b81cc0861436af27f2c8fe1a44d3ced26ef8a4a7751b50b3ee4"}
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.319658 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.361016 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dnscw/crc-debug-dp4vm"]
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.371463 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dnscw/crc-debug-dp4vm"]
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.475170 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw7gd\" (UniqueName: \"kubernetes.io/projected/56cd3338-234b-47cc-96bd-6627186a0c9b-kube-api-access-mw7gd\") pod \"56cd3338-234b-47cc-96bd-6627186a0c9b\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") "
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.475331 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56cd3338-234b-47cc-96bd-6627186a0c9b-host\") pod \"56cd3338-234b-47cc-96bd-6627186a0c9b\" (UID: \"56cd3338-234b-47cc-96bd-6627186a0c9b\") "
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.475739 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56cd3338-234b-47cc-96bd-6627186a0c9b-host" (OuterVolumeSpecName: "host") pod "56cd3338-234b-47cc-96bd-6627186a0c9b" (UID: "56cd3338-234b-47cc-96bd-6627186a0c9b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.476151 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/56cd3338-234b-47cc-96bd-6627186a0c9b-host\") on node \"crc\" DevicePath \"\""
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.482847 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56cd3338-234b-47cc-96bd-6627186a0c9b-kube-api-access-mw7gd" (OuterVolumeSpecName: "kube-api-access-mw7gd") pod "56cd3338-234b-47cc-96bd-6627186a0c9b" (UID: "56cd3338-234b-47cc-96bd-6627186a0c9b"). InnerVolumeSpecName "kube-api-access-mw7gd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 16:24:33 crc kubenswrapper[4718]: I1210 16:24:33.578177 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw7gd\" (UniqueName: \"kubernetes.io/projected/56cd3338-234b-47cc-96bd-6627186a0c9b-kube-api-access-mw7gd\") on node \"crc\" DevicePath \"\""
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.033192 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56cd3338-234b-47cc-96bd-6627186a0c9b" path="/var/lib/kubelet/pods/56cd3338-234b-47cc-96bd-6627186a0c9b/volumes"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.219904 4718 scope.go:117] "RemoveContainer" containerID="77bb5d46b5b86b81cc0861436af27f2c8fe1a44d3ced26ef8a4a7751b50b3ee4"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.219956 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-dp4vm"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.586770 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dnscw/crc-debug-ktdb2"]
Dec 10 16:24:34 crc kubenswrapper[4718]: E1210 16:24:34.587310 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="registry-server"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.587327 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="registry-server"
Dec 10 16:24:34 crc kubenswrapper[4718]: E1210 16:24:34.587346 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="extract-utilities"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.587356 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="extract-utilities"
Dec 10 16:24:34 crc kubenswrapper[4718]: E1210 16:24:34.587406 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56cd3338-234b-47cc-96bd-6627186a0c9b" containerName="container-00"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.587415 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="56cd3338-234b-47cc-96bd-6627186a0c9b" containerName="container-00"
Dec 10 16:24:34 crc kubenswrapper[4718]: E1210 16:24:34.587438 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="extract-content"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.587445 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="extract-content"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.587691 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="56cd3338-234b-47cc-96bd-6627186a0c9b" containerName="container-00"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.587707 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4525909a-e5eb-458e-9a90-b5a079e0eb09" containerName="registry-server"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.589194 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.699279 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-host\") pod \"crc-debug-ktdb2\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") " pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.699541 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7257r\" (UniqueName: \"kubernetes.io/projected/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-kube-api-access-7257r\") pod \"crc-debug-ktdb2\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") " pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.814141 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7257r\" (UniqueName: \"kubernetes.io/projected/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-kube-api-access-7257r\") pod \"crc-debug-ktdb2\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") " pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.814276 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-host\") pod \"crc-debug-ktdb2\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") " pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.814488 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-host\") pod \"crc-debug-ktdb2\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") " pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.833290 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7257r\" (UniqueName: \"kubernetes.io/projected/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-kube-api-access-7257r\") pod \"crc-debug-ktdb2\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") " pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:34 crc kubenswrapper[4718]: I1210 16:24:34.910760 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:35 crc kubenswrapper[4718]: I1210 16:24:35.230845 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-ktdb2" event={"ID":"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3","Type":"ContainerStarted","Data":"d3912bdd3f3260a1469abdfd8d84dba4b9be9ec3c847aff9e606ee28b3fdf21e"}
Dec 10 16:24:36 crc kubenswrapper[4718]: I1210 16:24:36.243935 4718 generic.go:334] "Generic (PLEG): container finished" podID="cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" containerID="e2ec0ce47d97794172761d845e0704316b60837db39ff21787ed778f8073e06a" exitCode=0
Dec 10 16:24:36 crc kubenswrapper[4718]: I1210 16:24:36.244252 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-ktdb2" event={"ID":"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3","Type":"ContainerDied","Data":"e2ec0ce47d97794172761d845e0704316b60837db39ff21787ed778f8073e06a"}
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.379304 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.467659 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7257r\" (UniqueName: \"kubernetes.io/projected/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-kube-api-access-7257r\") pod \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") "
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.467903 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-host\") pod \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\" (UID: \"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3\") "
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.467965 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-host" (OuterVolumeSpecName: "host") pod "cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" (UID: "cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.469706 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-host\") on node \"crc\" DevicePath \"\""
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.475580 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-kube-api-access-7257r" (OuterVolumeSpecName: "kube-api-access-7257r") pod "cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" (UID: "cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3"). InnerVolumeSpecName "kube-api-access-7257r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 10 16:24:37 crc kubenswrapper[4718]: I1210 16:24:37.571294 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7257r\" (UniqueName: \"kubernetes.io/projected/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3-kube-api-access-7257r\") on node \"crc\" DevicePath \"\""
Dec 10 16:24:38 crc kubenswrapper[4718]: I1210 16:24:38.263999 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-ktdb2" event={"ID":"cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3","Type":"ContainerDied","Data":"d3912bdd3f3260a1469abdfd8d84dba4b9be9ec3c847aff9e606ee28b3fdf21e"}
Dec 10 16:24:38 crc kubenswrapper[4718]: I1210 16:24:38.264374 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3912bdd3f3260a1469abdfd8d84dba4b9be9ec3c847aff9e606ee28b3fdf21e"
Dec 10 16:24:38 crc kubenswrapper[4718]: I1210 16:24:38.264489 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-ktdb2"
Dec 10 16:24:38 crc kubenswrapper[4718]: I1210 16:24:38.516461 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dnscw/crc-debug-ktdb2"]
Dec 10 16:24:38 crc kubenswrapper[4718]: I1210 16:24:38.525084 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dnscw/crc-debug-ktdb2"]
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.675520 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dnscw/crc-debug-n7dfd"]
Dec 10 16:24:39 crc kubenswrapper[4718]: E1210 16:24:39.676054 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" containerName="container-00"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.676073 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" containerName="container-00"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.676625 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" containerName="container-00"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.677826 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.817019 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69394bb2-cf09-483a-92e6-55565fe9d03e-host\") pod \"crc-debug-n7dfd\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") " pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.817213 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc8hf\" (UniqueName: \"kubernetes.io/projected/69394bb2-cf09-483a-92e6-55565fe9d03e-kube-api-access-dc8hf\") pod \"crc-debug-n7dfd\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") " pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.920252 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69394bb2-cf09-483a-92e6-55565fe9d03e-host\") pod \"crc-debug-n7dfd\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") " pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.920370 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69394bb2-cf09-483a-92e6-55565fe9d03e-host\") pod \"crc-debug-n7dfd\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") " pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.920670 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc8hf\" (UniqueName: \"kubernetes.io/projected/69394bb2-cf09-483a-92e6-55565fe9d03e-kube-api-access-dc8hf\") pod \"crc-debug-n7dfd\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") " pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:39 crc kubenswrapper[4718]: I1210 16:24:39.977712 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc8hf\" (UniqueName: \"kubernetes.io/projected/69394bb2-cf09-483a-92e6-55565fe9d03e-kube-api-access-dc8hf\") pod \"crc-debug-n7dfd\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") " pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:40 crc kubenswrapper[4718]: I1210 16:24:40.002406 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:40 crc kubenswrapper[4718]: I1210 16:24:40.033982 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3" path="/var/lib/kubelet/pods/cbd2aaf4-4691-48d5-bac5-9aa9f9d66ac3/volumes"
Dec 10 16:24:40 crc kubenswrapper[4718]: W1210 16:24:40.039570 4718 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69394bb2_cf09_483a_92e6_55565fe9d03e.slice/crio-8c332f79f00ed7a0d7e7054fab1492980ad8f54300f5372977b31c5d45b5f57a WatchSource:0}: Error finding container 8c332f79f00ed7a0d7e7054fab1492980ad8f54300f5372977b31c5d45b5f57a: Status 404 returned error can't find the container with id 8c332f79f00ed7a0d7e7054fab1492980ad8f54300f5372977b31c5d45b5f57a
Dec 10 16:24:40 crc kubenswrapper[4718]: I1210 16:24:40.287033 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-n7dfd" event={"ID":"69394bb2-cf09-483a-92e6-55565fe9d03e","Type":"ContainerStarted","Data":"7ec0ec4b6c352519952735a35ba5d5b9ba85a66dae84a3ab9c08cdfdc181dd15"}
Dec 10 16:24:40 crc kubenswrapper[4718]: I1210 16:24:40.287439 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-n7dfd" event={"ID":"69394bb2-cf09-483a-92e6-55565fe9d03e","Type":"ContainerStarted","Data":"8c332f79f00ed7a0d7e7054fab1492980ad8f54300f5372977b31c5d45b5f57a"}
Dec 10 16:24:40 crc kubenswrapper[4718]: I1210 16:24:40.307283 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dnscw/crc-debug-n7dfd" podStartSLOduration=1.30724773 podStartE2EDuration="1.30724773s" podCreationTimestamp="2025-12-10 16:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-10 16:24:40.29968849 +0000 UTC m=+6785.248911917" watchObservedRunningTime="2025-12-10 16:24:40.30724773 +0000 UTC m=+6785.256471147"
Dec 10 16:24:41 crc kubenswrapper[4718]: I1210 16:24:41.297589 4718 generic.go:334] "Generic (PLEG): container finished" podID="69394bb2-cf09-483a-92e6-55565fe9d03e" containerID="7ec0ec4b6c352519952735a35ba5d5b9ba85a66dae84a3ab9c08cdfdc181dd15" exitCode=0
Dec 10 16:24:41 crc kubenswrapper[4718]: I1210 16:24:41.297708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/crc-debug-n7dfd" event={"ID":"69394bb2-cf09-483a-92e6-55565fe9d03e","Type":"ContainerDied","Data":"7ec0ec4b6c352519952735a35ba5d5b9ba85a66dae84a3ab9c08cdfdc181dd15"}
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.439535 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-n7dfd"
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.480322 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dnscw/crc-debug-n7dfd"]
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.494448 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dnscw/crc-debug-n7dfd"]
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.585721 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc8hf\" (UniqueName: \"kubernetes.io/projected/69394bb2-cf09-483a-92e6-55565fe9d03e-kube-api-access-dc8hf\") pod \"69394bb2-cf09-483a-92e6-55565fe9d03e\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") "
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.585833 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69394bb2-cf09-483a-92e6-55565fe9d03e-host\") pod \"69394bb2-cf09-483a-92e6-55565fe9d03e\" (UID: \"69394bb2-cf09-483a-92e6-55565fe9d03e\") "
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.586680 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69394bb2-cf09-483a-92e6-55565fe9d03e-host" (OuterVolumeSpecName: "host") pod "69394bb2-cf09-483a-92e6-55565fe9d03e" (UID: "69394bb2-cf09-483a-92e6-55565fe9d03e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.606116 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69394bb2-cf09-483a-92e6-55565fe9d03e-kube-api-access-dc8hf" (OuterVolumeSpecName: "kube-api-access-dc8hf") pod "69394bb2-cf09-483a-92e6-55565fe9d03e" (UID: "69394bb2-cf09-483a-92e6-55565fe9d03e"). InnerVolumeSpecName "kube-api-access-dc8hf".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.689016 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc8hf\" (UniqueName: \"kubernetes.io/projected/69394bb2-cf09-483a-92e6-55565fe9d03e-kube-api-access-dc8hf\") on node \"crc\" DevicePath \"\"" Dec 10 16:24:42 crc kubenswrapper[4718]: I1210 16:24:42.689347 4718 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/69394bb2-cf09-483a-92e6-55565fe9d03e-host\") on node \"crc\" DevicePath \"\"" Dec 10 16:24:43 crc kubenswrapper[4718]: I1210 16:24:43.320255 4718 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c332f79f00ed7a0d7e7054fab1492980ad8f54300f5372977b31c5d45b5f57a" Dec 10 16:24:43 crc kubenswrapper[4718]: I1210 16:24:43.320339 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/crc-debug-n7dfd" Dec 10 16:24:44 crc kubenswrapper[4718]: I1210 16:24:44.035597 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69394bb2-cf09-483a-92e6-55565fe9d03e" path="/var/lib/kubelet/pods/69394bb2-cf09-483a-92e6-55565fe9d03e/volumes" Dec 10 16:25:17 crc kubenswrapper[4718]: I1210 16:25:17.781495 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58896fd778-pk5pp_c53fcbf5-3330-4aff-a699-ff475344e705/barbican-api/0.log" Dec 10 16:25:17 crc kubenswrapper[4718]: I1210 16:25:17.886283 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58896fd778-pk5pp_c53fcbf5-3330-4aff-a699-ff475344e705/barbican-api-log/0.log" Dec 10 16:25:17 crc kubenswrapper[4718]: I1210 16:25:17.947104 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-595577df7d-rjzmf_2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc/barbican-keystone-listener/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 
16:25:18.060706 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-595577df7d-rjzmf_2b5afb71-a4ff-4786-ab1b-3e3d04f7e7dc/barbican-keystone-listener-log/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.210434 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6894dc7-l9prg_f8b33603-9f2b-410e-a4ae-52b20ea62bd9/barbican-worker-log/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.213049 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-86c6894dc7-l9prg_f8b33603-9f2b-410e-a4ae-52b20ea62bd9/barbican-worker/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.477877 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wkbj7_a7b1a942-17e0-4573-abd2-4bf182a8eef0/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.591677 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/ceilometer-central-agent/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.699312 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/ceilometer-notification-agent/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.747372 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/proxy-httpd/0.log" Dec 10 16:25:18 crc kubenswrapper[4718]: I1210 16:25:18.799605 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_464e9486-56f7-4723-9cb6-6fe63cd86ae4/sg-core/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.051025 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_fbb11f94-73a2-4870-94d1-f7c6a699bc57/cinder-api-log/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.183458 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7/cinder-scheduler/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.392058 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_74391ed2-7cf2-4c88-a940-e2bb3e6d3ad7/probe/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.403538 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fbb11f94-73a2-4870-94d1-f7c6a699bc57/cinder-api/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.485010 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-2plpc_7cea2862-2631-4d3a-98f8-29afc2428d28/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.689772 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vzmzr_2c9d42bb-c253-4ef3-92eb-71a2ffbbee3c/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.764163 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c6b84c7df-hwqf9_177ad74b-362a-478f-a755-7c2862fa179d/init/0.log" Dec 10 16:25:19 crc kubenswrapper[4718]: I1210 16:25:19.931093 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c6b84c7df-hwqf9_177ad74b-362a-478f-a755-7c2862fa179d/init/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.113284 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-k9sjb_93b2ee40-9140-4ccf-8af3-d9bfc04ca78c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.187621 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5c6b84c7df-hwqf9_177ad74b-362a-478f-a755-7c2862fa179d/dnsmasq-dns/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.336697 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b/glance-httpd/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.372142 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_45fa8b4a-59e6-4adf-bbf1-4dc8d23a802b/glance-log/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.580222 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2504be79-1852-48ec-b2d2-d687ae68bd09/glance-log/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.602492 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2504be79-1852-48ec-b2d2-d687ae68bd09/glance-httpd/0.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.771886 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c9ddb894-brvxz_e1a09589-44b9-49f4-8970-d3381c3d4b99/horizon/1.log" Dec 10 16:25:20 crc kubenswrapper[4718]: I1210 16:25:20.869545 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c9ddb894-brvxz_e1a09589-44b9-49f4-8970-d3381c3d4b99/horizon/0.log" Dec 10 16:25:21 crc kubenswrapper[4718]: I1210 16:25:21.271873 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-lnpsl_3de81842-2365-419e-88fd-b0b4611f3e8e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:21 crc kubenswrapper[4718]: I1210 16:25:21.573002 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-8vcjs_df7b87da-2bb2-494d-b840-478a58f1950c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:21 crc kubenswrapper[4718]: I1210 16:25:21.791279 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-77c9ddb894-brvxz_e1a09589-44b9-49f4-8970-d3381c3d4b99/horizon-log/0.log" Dec 10 16:25:21 crc kubenswrapper[4718]: I1210 16:25:21.848519 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29422981-fh4sm_dc3a87e8-6bf0-43e8-a75d-d743c4182d36/keystone-cron/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.018861 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29423041-7hjkl_c6059577-40c7-4880-8a8f-f0b5736dbac2/keystone-cron/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.119951 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6bcdc7c9dc-hxhdn_6e6f831b-5d26-4e7c-9b6b-ebddeb01327c/keystone-api/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.144035 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7a8da87b-0de3-4e86-ad3d-b29f4cd4ad1c/kube-state-metrics/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.313911 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-r2rqj_080c7769-f2d8-47fa-aa3d-a1b63190a679/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.753433 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-b9vhb_2ce587d1-61d0-4844-bb2b-54894131a5bb/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.819582 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fd4cf9989-fxv7b_ddd7c56e-7efb-44f9-8da2-45d0d54a9756/neutron-httpd/0.log" Dec 10 16:25:22 crc kubenswrapper[4718]: I1210 16:25:22.920977 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5fd4cf9989-fxv7b_ddd7c56e-7efb-44f9-8da2-45d0d54a9756/neutron-api/0.log" Dec 10 16:25:23 crc kubenswrapper[4718]: I1210 16:25:23.627965 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_61020ad7-3d1d-4dab-9d46-2bb54b2e92d0/nova-cell0-conductor-conductor/0.log" Dec 10 16:25:23 crc kubenswrapper[4718]: I1210 16:25:23.679509 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_571880ea-f2a9-4e9e-99a5-c8bcaffb8675/memcached/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.028088 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_612c753e-cb9b-4995-b219-6e3b0d60cc22/nova-cell1-conductor-conductor/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.188890 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_da134815-ca06-4544-86a3-ebbc3d219c56/nova-cell1-novncproxy-novncproxy/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.388469 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-td49f_2dd95b44-946b-43ef-91a6-3eeab6ded836/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.418420 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_12aef49c-9e40-4cc4-a280-103e9c6180de/nova-api-log/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.650843 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_badca5dd-ef88-4de8-a596-9cb2adc01193/nova-metadata-log/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.692878 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_12aef49c-9e40-4cc4-a280-103e9c6180de/nova-api-api/0.log" Dec 10 16:25:24 crc kubenswrapper[4718]: I1210 16:25:24.991833 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f81943cf-47c7-424d-9473-2df3195bc9a6/mysql-bootstrap/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.130724 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_cc4b750b-8599-4d08-9b09-d2d75f035dc4/nova-scheduler-scheduler/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.183460 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f81943cf-47c7-424d-9473-2df3195bc9a6/mysql-bootstrap/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.249097 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f81943cf-47c7-424d-9473-2df3195bc9a6/galera/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.427767 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0708d5de-311d-46e3-981e-7bd7a2fc495c/mysql-bootstrap/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.658685 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0708d5de-311d-46e3-981e-7bd7a2fc495c/galera/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.673298 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_0708d5de-311d-46e3-981e-7bd7a2fc495c/mysql-bootstrap/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.718677 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c0b43254-f8fe-4187-a8ce-aa65f7ac327e/openstackclient/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.986419 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nf6b6_7326e5bc-27b1-4b9a-b0ea-979589622ea3/openstack-network-exporter/0.log" Dec 10 16:25:25 crc kubenswrapper[4718]: I1210 16:25:25.990317 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ftrjh_3e4f376b-2175-46d8-8b88-0560a3fcf231/ovn-controller/0.log" Dec 10 16:25:26 crc kubenswrapper[4718]: I1210 16:25:26.217790 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovsdb-server-init/0.log" Dec 10 16:25:26 crc kubenswrapper[4718]: I1210 16:25:26.476309 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovsdb-server/0.log" Dec 10 16:25:26 crc kubenswrapper[4718]: I1210 16:25:26.595717 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovsdb-server-init/0.log" Dec 10 16:25:26 crc kubenswrapper[4718]: I1210 16:25:26.767843 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_badca5dd-ef88-4de8-a596-9cb2adc01193/nova-metadata-metadata/0.log" Dec 10 16:25:26 crc kubenswrapper[4718]: I1210 16:25:26.782512 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-l8vtq_061fa283-77d3-42e2-b267-2c01852d4123/ovs-vswitchd/0.log" Dec 10 16:25:26 crc kubenswrapper[4718]: I1210 16:25:26.806478 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-sfg42_8ace4c93-2b2a-4185-b16a-d782334fa608/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.399356 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_870a0a88-dfaa-49b8-96ae-96f5991f2e75/ovn-northd/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.425009 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_870a0a88-dfaa-49b8-96ae-96f5991f2e75/openstack-network-exporter/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.428257 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_50fbf3ca-d871-4ccd-a412-636fa783e3d4/openstack-network-exporter/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.555580 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_50fbf3ca-d871-4ccd-a412-636fa783e3d4/ovsdbserver-nb/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.627018 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ccf190d-cc0e-471c-b506-9784b1e8b038/openstack-network-exporter/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.672157 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1ccf190d-cc0e-471c-b506-9784b1e8b038/ovsdbserver-sb/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.928189 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb9f4c9bb-nx4ml_2ea453fb-60ad-4093-b15f-5cb288f92511/placement-api/0.log" Dec 10 16:25:27 crc kubenswrapper[4718]: I1210 16:25:27.995631 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/init-config-reloader/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.222567 4718 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-7cb9f4c9bb-nx4ml_2ea453fb-60ad-4093-b15f-5cb288f92511/placement-log/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.305564 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/config-reloader/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.350111 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/init-config-reloader/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.383220 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/prometheus/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.451694 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cac0f06e-eca1-4268-9fb6-78207619e61c/thanos-sidecar/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.523450 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_55b4c58e-c07e-4cd2-8592-f57b1d9f9233/setup-container/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.727401 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_55b4c58e-c07e-4cd2-8592-f57b1d9f9233/setup-container/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.756724 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_55b4c58e-c07e-4cd2-8592-f57b1d9f9233/rabbitmq/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.800044 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5611ee41-14a4-45d3-88b1-e6e6c9bc4d13/setup-container/0.log" Dec 10 16:25:28 crc kubenswrapper[4718]: I1210 16:25:28.965358 4718 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5611ee41-14a4-45d3-88b1-e6e6c9bc4d13/setup-container/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.019149 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e530819-d029-4526-aed9-2cd33568dbcb/setup-container/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.040041 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_5611ee41-14a4-45d3-88b1-e6e6c9bc4d13/rabbitmq/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.226552 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e530819-d029-4526-aed9-2cd33568dbcb/setup-container/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.251482 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4e530819-d029-4526-aed9-2cd33568dbcb/rabbitmq/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.352075 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-vldh6_71d72952-f3a0-4c3c-97f8-26c143f154cc/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.443619 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-b6wmm_99702136-5bd9-4803-8ce9-8a89bd572648/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.541079 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-89nsq_10400b99-0213-470b-b37a-f0b9cd98ab2b/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.574729 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dt6pb_311630cc-3a9b-48d5-9407-879b0f508508/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.768753 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bsdk2_4d6f6be0-1d66-4c7e-a8e9-09826a416501/ssh-known-hosts-edpm-deployment/0.log" Dec 10 16:25:29 crc kubenswrapper[4718]: I1210 16:25:29.960880 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c7874df4c-ns7dm_52552bcb-7acc-4882-86ef-0353a39e7262/proxy-httpd/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.027602 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c7874df4c-ns7dm_52552bcb-7acc-4882-86ef-0353a39e7262/proxy-server/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.061689 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-nhp59_ea7defa5-2130-4d6d-8bba-9416bec21dfa/swift-ring-rebalance/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.218824 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-auditor/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.340520 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-server/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.351236 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-replicator/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.360193 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/account-reaper/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 
16:25:30.444543 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-auditor/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.476650 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-replicator/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.570643 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-updater/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.592667 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/container-server/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.627830 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-auditor/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.688616 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-expirer/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.731879 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-replicator/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.759058 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-server/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.847438 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/object-updater/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.851531 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/rsync/0.log" Dec 10 16:25:30 crc kubenswrapper[4718]: I1210 16:25:30.918164 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e094c947-215b-4386-906f-5ee833afa9d0/swift-recon-cron/0.log" Dec 10 16:25:31 crc kubenswrapper[4718]: I1210 16:25:31.004698 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hf8zl_7aeaa205-67e7-4b41-a2d6-fff74ab0d61b/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:31 crc kubenswrapper[4718]: I1210 16:25:31.093974 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_df134785-8fb2-418f-89ba-55f6d822f50a/tempest-tests-tempest-tests-runner/0.log" Dec 10 16:25:31 crc kubenswrapper[4718]: I1210 16:25:31.406597 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c99e4dd1-6bd0-4be7-ad00-40d50d9a4e2e/test-operator-logs-container/0.log" Dec 10 16:25:31 crc kubenswrapper[4718]: I1210 16:25:31.528028 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nt5t9_e98b8eb6-cfd6-4125-973e-7cda6cdeceeb/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 10 16:25:32 crc kubenswrapper[4718]: I1210 16:25:32.268860 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_cda091c0-e668-4132-95bf-e956b4ee9b39/watcher-applier/0.log" Dec 10 16:25:32 crc kubenswrapper[4718]: I1210 16:25:32.996612 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_b0c0a449-91c3-43fe-ba1b-02a146745b82/watcher-api-log/0.log" Dec 10 16:25:35 crc kubenswrapper[4718]: I1210 16:25:35.072932 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_watcher-decision-engine-0_0105b21d-8a6a-4368-aec4-80c009daecd1/watcher-decision-engine/0.log" Dec 10 16:25:36 crc kubenswrapper[4718]: I1210 16:25:36.474232 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_b0c0a449-91c3-43fe-ba1b-02a146745b82/watcher-api/0.log" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.379292 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-httpz"] Dec 10 16:25:46 crc kubenswrapper[4718]: E1210 16:25:46.387462 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69394bb2-cf09-483a-92e6-55565fe9d03e" containerName="container-00" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.387490 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="69394bb2-cf09-483a-92e6-55565fe9d03e" containerName="container-00" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.387746 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="69394bb2-cf09-483a-92e6-55565fe9d03e" containerName="container-00" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.389525 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.389575 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-httpz"] Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.499857 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jc9\" (UniqueName: \"kubernetes.io/projected/adbf7f1f-280e-4a0f-978a-5d579dea0c24-kube-api-access-p9jc9\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.500342 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-utilities\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.500509 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-catalog-content\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.603089 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9jc9\" (UniqueName: \"kubernetes.io/projected/adbf7f1f-280e-4a0f-978a-5d579dea0c24-kube-api-access-p9jc9\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.603192 4718 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-utilities\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.603255 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-catalog-content\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.603939 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-utilities\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.603996 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-catalog-content\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.627720 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9jc9\" (UniqueName: \"kubernetes.io/projected/adbf7f1f-280e-4a0f-978a-5d579dea0c24-kube-api-access-p9jc9\") pod \"community-operators-httpz\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:46 crc kubenswrapper[4718]: I1210 16:25:46.709600 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:47 crc kubenswrapper[4718]: I1210 16:25:47.310174 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-httpz"] Dec 10 16:25:48 crc kubenswrapper[4718]: I1210 16:25:48.000817 4718 generic.go:334] "Generic (PLEG): container finished" podID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerID="ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39" exitCode=0 Dec 10 16:25:48 crc kubenswrapper[4718]: I1210 16:25:48.001008 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-httpz" event={"ID":"adbf7f1f-280e-4a0f-978a-5d579dea0c24","Type":"ContainerDied","Data":"ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39"} Dec 10 16:25:48 crc kubenswrapper[4718]: I1210 16:25:48.001193 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-httpz" event={"ID":"adbf7f1f-280e-4a0f-978a-5d579dea0c24","Type":"ContainerStarted","Data":"3352f20316c666d3731a892d7ed02a028defa31394cfaa4deee023c06fcc8c67"} Dec 10 16:25:48 crc kubenswrapper[4718]: I1210 16:25:48.084569 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:25:48 crc kubenswrapper[4718]: I1210 16:25:48.084717 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:25:50 crc kubenswrapper[4718]: I1210 16:25:50.023665 4718 generic.go:334] "Generic 
(PLEG): container finished" podID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerID="84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd" exitCode=0 Dec 10 16:25:50 crc kubenswrapper[4718]: I1210 16:25:50.034777 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-httpz" event={"ID":"adbf7f1f-280e-4a0f-978a-5d579dea0c24","Type":"ContainerDied","Data":"84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd"} Dec 10 16:25:51 crc kubenswrapper[4718]: I1210 16:25:51.039708 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-httpz" event={"ID":"adbf7f1f-280e-4a0f-978a-5d579dea0c24","Type":"ContainerStarted","Data":"4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4"} Dec 10 16:25:51 crc kubenswrapper[4718]: I1210 16:25:51.060469 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-httpz" podStartSLOduration=2.607777379 podStartE2EDuration="5.060441102s" podCreationTimestamp="2025-12-10 16:25:46 +0000 UTC" firstStartedPulling="2025-12-10 16:25:48.002989487 +0000 UTC m=+6852.952212924" lastFinishedPulling="2025-12-10 16:25:50.45565323 +0000 UTC m=+6855.404876647" observedRunningTime="2025-12-10 16:25:51.057064708 +0000 UTC m=+6856.006288125" watchObservedRunningTime="2025-12-10 16:25:51.060441102 +0000 UTC m=+6856.009664539" Dec 10 16:25:56 crc kubenswrapper[4718]: I1210 16:25:56.710651 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:56 crc kubenswrapper[4718]: I1210 16:25:56.712523 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:56 crc kubenswrapper[4718]: I1210 16:25:56.763072 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:57 crc kubenswrapper[4718]: I1210 16:25:57.172881 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:57 crc kubenswrapper[4718]: I1210 16:25:57.232145 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-httpz"] Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.118891 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-httpz" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="registry-server" containerID="cri-o://4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4" gracePeriod=2 Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.586533 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.701476 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-catalog-content\") pod \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.701799 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-utilities\") pod \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.701885 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9jc9\" (UniqueName: \"kubernetes.io/projected/adbf7f1f-280e-4a0f-978a-5d579dea0c24-kube-api-access-p9jc9\") pod 
\"adbf7f1f-280e-4a0f-978a-5d579dea0c24\" (UID: \"adbf7f1f-280e-4a0f-978a-5d579dea0c24\") " Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.703865 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-utilities" (OuterVolumeSpecName: "utilities") pod "adbf7f1f-280e-4a0f-978a-5d579dea0c24" (UID: "adbf7f1f-280e-4a0f-978a-5d579dea0c24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.710213 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbf7f1f-280e-4a0f-978a-5d579dea0c24-kube-api-access-p9jc9" (OuterVolumeSpecName: "kube-api-access-p9jc9") pod "adbf7f1f-280e-4a0f-978a-5d579dea0c24" (UID: "adbf7f1f-280e-4a0f-978a-5d579dea0c24"). InnerVolumeSpecName "kube-api-access-p9jc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.805047 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9jc9\" (UniqueName: \"kubernetes.io/projected/adbf7f1f-280e-4a0f-978a-5d579dea0c24-kube-api-access-p9jc9\") on node \"crc\" DevicePath \"\"" Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.805093 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:25:59 crc kubenswrapper[4718]: I1210 16:25:59.934677 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adbf7f1f-280e-4a0f-978a-5d579dea0c24" (UID: "adbf7f1f-280e-4a0f-978a-5d579dea0c24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.009489 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adbf7f1f-280e-4a0f-978a-5d579dea0c24-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.131762 4718 generic.go:334] "Generic (PLEG): container finished" podID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerID="4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4" exitCode=0 Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.131820 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-httpz" event={"ID":"adbf7f1f-280e-4a0f-978a-5d579dea0c24","Type":"ContainerDied","Data":"4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4"} Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.131878 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-httpz" event={"ID":"adbf7f1f-280e-4a0f-978a-5d579dea0c24","Type":"ContainerDied","Data":"3352f20316c666d3731a892d7ed02a028defa31394cfaa4deee023c06fcc8c67"} Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.131904 4718 scope.go:117] "RemoveContainer" containerID="4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.133032 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-httpz" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.169760 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-httpz"] Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.176441 4718 scope.go:117] "RemoveContainer" containerID="84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.179716 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-httpz"] Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.209320 4718 scope.go:117] "RemoveContainer" containerID="ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.274024 4718 scope.go:117] "RemoveContainer" containerID="4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4" Dec 10 16:26:00 crc kubenswrapper[4718]: E1210 16:26:00.275212 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4\": container with ID starting with 4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4 not found: ID does not exist" containerID="4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.275262 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4"} err="failed to get container status \"4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4\": rpc error: code = NotFound desc = could not find container \"4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4\": container with ID starting with 4634b575784317ba0d581e1a04b62e2d4b52819dd5fdd2f9da70ea66f78e28e4 not 
found: ID does not exist" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.275295 4718 scope.go:117] "RemoveContainer" containerID="84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd" Dec 10 16:26:00 crc kubenswrapper[4718]: E1210 16:26:00.275841 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd\": container with ID starting with 84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd not found: ID does not exist" containerID="84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.275892 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd"} err="failed to get container status \"84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd\": rpc error: code = NotFound desc = could not find container \"84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd\": container with ID starting with 84eb70a204faf98e59b84a88c8a9c7d7d9747d79e2ca4eba691501b85febd2fd not found: ID does not exist" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.275930 4718 scope.go:117] "RemoveContainer" containerID="ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39" Dec 10 16:26:00 crc kubenswrapper[4718]: E1210 16:26:00.276242 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39\": container with ID starting with ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39 not found: ID does not exist" containerID="ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39" Dec 10 16:26:00 crc kubenswrapper[4718]: I1210 16:26:00.276366 4718 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39"} err="failed to get container status \"ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39\": rpc error: code = NotFound desc = could not find container \"ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39\": container with ID starting with ae49f19881b6073a72a6651aad899109aed41c3117206a41a9ef0052e82dab39 not found: ID does not exist" Dec 10 16:26:01 crc kubenswrapper[4718]: I1210 16:26:01.873526 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4s6xq_61e4671d-9417-472d-9d76-64fdcc0e3297/kube-rbac-proxy/0.log" Dec 10 16:26:01 crc kubenswrapper[4718]: I1210 16:26:01.970826 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-4s6xq_61e4671d-9417-472d-9d76-64fdcc0e3297/manager/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.033359 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" path="/var/lib/kubelet/pods/adbf7f1f-280e-4a0f-978a-5d579dea0c24/volumes" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.129329 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jfdsv_82086b4c-0222-45a7-a3c3-fc2504f63a4e/kube-rbac-proxy/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.190330 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jfdsv_82086b4c-0222-45a7-a3c3-fc2504f63a4e/manager/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.425008 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/util/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.588066 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/pull/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.603598 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/pull/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.639716 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/util/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.828249 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/pull/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.842354 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/util/0.log" Dec 10 16:26:02 crc kubenswrapper[4718]: I1210 16:26:02.848378 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d48c0f09d4f85e6997d52028ed3b1a3994c4f4b73b6b561c4a3bbe5136txtln_92739fd0-cf7c-45af-a0db-dbf3f2ffdddf/extract/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.024940 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6mx57_a3023af7-f9ec-44a3-a532-0f6d51843443/kube-rbac-proxy/0.log" Dec 10 16:26:03 crc 
kubenswrapper[4718]: I1210 16:26:03.055121 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6mx57_a3023af7-f9ec-44a3-a532-0f6d51843443/manager/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.132337 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-xvwrl_513a8781-70b0-4692-9141-0c60ef254a98/kube-rbac-proxy/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.316227 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-xvwrl_513a8781-70b0-4692-9141-0c60ef254a98/manager/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.362969 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gw22w_a701287e-359e-429d-8b94-c4e06e8922a8/kube-rbac-proxy/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.383080 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-gw22w_a701287e-359e-429d-8b94-c4e06e8922a8/manager/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.554654 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-mshqn_80f8ae23-3a84-4810-9868-6571b6cf56a1/kube-rbac-proxy/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.627284 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-mshqn_80f8ae23-3a84-4810-9868-6571b6cf56a1/manager/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.832085 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-cvh4g_6a2a49d9-73fc-4798-8173-ed230aa16811/kube-rbac-proxy/0.log" Dec 10 16:26:03 crc kubenswrapper[4718]: I1210 16:26:03.935544 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-2jgmr_2516d98a-9991-4d5b-9791-14642a4ec629/kube-rbac-proxy/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.052257 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-2jgmr_2516d98a-9991-4d5b-9791-14642a4ec629/manager/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.071604 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-cvh4g_6a2a49d9-73fc-4798-8173-ed230aa16811/manager/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.170718 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-9sxwk_7d8ae7e9-7545-4ab6-b87c-6c5484b47424/kube-rbac-proxy/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.352461 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-9sxwk_7d8ae7e9-7545-4ab6-b87c-6c5484b47424/manager/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.448080 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-cfcm2_e7677f94-866d-45c7-b1c9-70fd2b7c7012/manager/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.449632 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-cfcm2_e7677f94-866d-45c7-b1c9-70fd2b7c7012/kube-rbac-proxy/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.684590 
4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7qqzr_32690a0c-0ce7-4639-b30f-18a1a91ed86d/kube-rbac-proxy/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.760894 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7qqzr_32690a0c-0ce7-4639-b30f-18a1a91ed86d/manager/0.log" Dec 10 16:26:04 crc kubenswrapper[4718]: I1210 16:26:04.864092 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-6smz5_191f3c0d-be7d-463e-9979-922dfb629747/kube-rbac-proxy/0.log" Dec 10 16:26:05 crc kubenswrapper[4718]: I1210 16:26:05.057607 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-6smz5_191f3c0d-be7d-463e-9979-922dfb629747/manager/0.log" Dec 10 16:26:05 crc kubenswrapper[4718]: I1210 16:26:05.107937 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-426mn_a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b/kube-rbac-proxy/0.log" Dec 10 16:26:05 crc kubenswrapper[4718]: I1210 16:26:05.388698 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-426mn_a3e406b6-e5c1-4c25-b9ee-80fbbb3eb89b/manager/0.log" Dec 10 16:26:05 crc kubenswrapper[4718]: I1210 16:26:05.444301 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-s4568_664faf77-d6a3-4b57-9dc9-ca7a4879c0ef/kube-rbac-proxy/0.log" Dec 10 16:26:05 crc kubenswrapper[4718]: I1210 16:26:05.542168 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-s4568_664faf77-d6a3-4b57-9dc9-ca7a4879c0ef/manager/0.log" Dec 10 16:26:05 crc 
kubenswrapper[4718]: I1210 16:26:05.674518 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl_463f6bf2-85ef-488a-8223-56898633fe8f/kube-rbac-proxy/0.log" Dec 10 16:26:05 crc kubenswrapper[4718]: I1210 16:26:05.756500 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7f95dc5b94n9mbl_463f6bf2-85ef-488a-8223-56898633fe8f/manager/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.246153 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-966884dd6-7tsss_08ebcdcc-242d-43fe-bb21-3ddf6b7ae71f/operator/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.381288 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-ggxhd_5b117425-d366-4008-9216-4696f8736b81/registry-server/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.494294 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-hq6tv_204f0155-9693-4239-8a7a-440255d5ad50/kube-rbac-proxy/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.667289 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6s7dx_e4e01550-5ee5-4afc-a01a-b3ea52b47f23/kube-rbac-proxy/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.699775 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-hq6tv_204f0155-9693-4239-8a7a-440255d5ad50/manager/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.819232 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-6s7dx_e4e01550-5ee5-4afc-a01a-b3ea52b47f23/manager/0.log" Dec 10 16:26:06 crc kubenswrapper[4718]: I1210 16:26:06.904124 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-nb754_91cdfe7c-2e49-4919-a7ff-0559e12ecf8b/operator/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.114625 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-7vvmc_daeefe3a-b055-4ee9-be2e-a93afc257365/kube-rbac-proxy/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.254891 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-7vvmc_daeefe3a-b055-4ee9-be2e-a93afc257365/manager/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.266660 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-jqlwv_469e8dbb-654f-464b-80f9-ac7b0d55439f/kube-rbac-proxy/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.559377 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-r7vfj_12ba5675-3e82-41d7-be5a-ecbe1a440af5/manager/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.578034 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-85cbc5886b-z2lw4_81cbd3a0-2031-418d-95b6-fdac9d170a51/manager/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.584985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-r7vfj_12ba5675-3e82-41d7-be5a-ecbe1a440af5/kube-rbac-proxy/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.678708 4718 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-jqlwv_469e8dbb-654f-464b-80f9-ac7b0d55439f/manager/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.744831 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rn6gn_21d69144-5afe-4aa8-95f0-c6e7c8802b14/kube-rbac-proxy/0.log" Dec 10 16:26:07 crc kubenswrapper[4718]: I1210 16:26:07.857797 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rn6gn_21d69144-5afe-4aa8-95f0-c6e7c8802b14/manager/0.log" Dec 10 16:26:18 crc kubenswrapper[4718]: I1210 16:26:18.085209 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:26:18 crc kubenswrapper[4718]: I1210 16:26:18.085804 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:26:27 crc kubenswrapper[4718]: I1210 16:26:27.668844 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bvd4w_fe1f76c3-fb22-4c92-bc66-7048e04e63b0/control-plane-machine-set-operator/0.log" Dec 10 16:26:27 crc kubenswrapper[4718]: I1210 16:26:27.863130 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prs2h_8b5042f5-52a1-42da-9a21-72d7b2a75c75/kube-rbac-proxy/0.log" Dec 10 16:26:27 crc 
kubenswrapper[4718]: I1210 16:26:27.864451 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-prs2h_8b5042f5-52a1-42da-9a21-72d7b2a75c75/machine-api-operator/0.log" Dec 10 16:26:41 crc kubenswrapper[4718]: I1210 16:26:41.356432 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-nmkz4_8989c984-6f88-4b26-9c39-37cd583802d7/cert-manager-controller/0.log" Dec 10 16:26:41 crc kubenswrapper[4718]: I1210 16:26:41.506866 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-kjxkw_33afc510-28a4-4f71-810d-da9f04ca2a86/cert-manager-cainjector/0.log" Dec 10 16:26:41 crc kubenswrapper[4718]: I1210 16:26:41.549109 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-t9tzp_07467c5a-e532-4233-8736-8191dbbdd234/cert-manager-webhook/0.log" Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.084897 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.085482 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.085557 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.086714 4718 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ed5a435f21adc700b92bc75b1658785da982c7c96614fd189595cd44c41f5bd"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.086792 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://8ed5a435f21adc700b92bc75b1658785da982c7c96614fd189595cd44c41f5bd" gracePeriod=600 Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.721504 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="8ed5a435f21adc700b92bc75b1658785da982c7c96614fd189595cd44c41f5bd" exitCode=0 Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.721579 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"8ed5a435f21adc700b92bc75b1658785da982c7c96614fd189595cd44c41f5bd"} Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.721970 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab"} Dec 10 16:26:48 crc kubenswrapper[4718]: I1210 16:26:48.722006 4718 scope.go:117] "RemoveContainer" containerID="efbe25aac6a704d523a48396ea05982c2cc2181337c42dd02c1adda5c0cc383d" Dec 10 16:26:54 crc kubenswrapper[4718]: I1210 16:26:54.874000 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-p79rx_15f18f34-32e0-49a4-b05d-ccd88e6c9541/nmstate-console-plugin/0.log" Dec 10 16:26:55 crc kubenswrapper[4718]: I1210 16:26:55.062010 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2pm2t_1d31a2e1-7843-4881-807c-38aed6f2ee1d/nmstate-handler/0.log" Dec 10 16:26:55 crc kubenswrapper[4718]: I1210 16:26:55.096309 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-l52p4_dc2fd026-789f-445a-befd-cdaf23a77c25/kube-rbac-proxy/0.log" Dec 10 16:26:55 crc kubenswrapper[4718]: I1210 16:26:55.152644 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-l52p4_dc2fd026-789f-445a-befd-cdaf23a77c25/nmstate-metrics/0.log" Dec 10 16:26:55 crc kubenswrapper[4718]: I1210 16:26:55.280443 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-99xzv_b28ae747-6984-4dea-8efd-f3f238f56386/nmstate-operator/0.log" Dec 10 16:26:55 crc kubenswrapper[4718]: I1210 16:26:55.431883 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-xkpb5_64da34db-cd1c-46a7-9a41-69926590d466/nmstate-webhook/0.log" Dec 10 16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.425830 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wm6nx_e08743b1-961a-43f6-a4b4-c546f2ce87cf/kube-rbac-proxy/0.log" Dec 10 16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.545463 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-wm6nx_e08743b1-961a-43f6-a4b4-c546f2ce87cf/controller/0.log" Dec 10 16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.666119 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log" Dec 10 
16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.931677 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log" Dec 10 16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.933318 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log" Dec 10 16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.948121 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log" Dec 10 16:27:10 crc kubenswrapper[4718]: I1210 16:27:10.968211 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.187492 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.219161 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.235407 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.255882 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.413707 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-frr-files/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.420115 4718 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-reloader/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.448282 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/cp-metrics/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.507276 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/controller/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.627574 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/frr-metrics/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.697404 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/kube-rbac-proxy/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.718176 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/kube-rbac-proxy-frr/0.log" Dec 10 16:27:11 crc kubenswrapper[4718]: I1210 16:27:11.847540 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/reloader/0.log" Dec 10 16:27:12 crc kubenswrapper[4718]: I1210 16:27:12.038264 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-45v7s_63f53c92-7e30-4be9-be8e-4eb3126d9fc1/frr-k8s-webhook-server/0.log" Dec 10 16:27:12 crc kubenswrapper[4718]: I1210 16:27:12.221584 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5d7fdbff7b-h2jjz_2c08926d-27dd-4571-a545-aeb91d97a810/manager/0.log" Dec 10 16:27:12 crc kubenswrapper[4718]: I1210 16:27:12.417959 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6fdd887f57-qm9df_dc4a693b-8b70-42fd-9a9e-83fcdcc7cb6e/webhook-server/0.log" Dec 10 16:27:12 crc kubenswrapper[4718]: I1210 16:27:12.507238 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pbmz7_1a988a3f-3408-4963-8f6c-b77351286aab/kube-rbac-proxy/0.log" Dec 10 16:27:13 crc kubenswrapper[4718]: I1210 16:27:13.239612 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pbmz7_1a988a3f-3408-4963-8f6c-b77351286aab/speaker/0.log" Dec 10 16:27:13 crc kubenswrapper[4718]: I1210 16:27:13.752767 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jhgkf_8c7642dd-0879-49cf-870a-a30a11c4d1b9/frr/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.096968 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/util/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.332505 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/pull/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.334464 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/util/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.343975 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/pull/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.508769 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/util/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.523425 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/pull/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.557141 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f6qfc5_ee0c87fb-677b-4338-85a7-3a53ad85e806/extract/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.733630 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/util/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.886898 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/pull/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.921509 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/util/0.log" Dec 10 16:27:26 crc kubenswrapper[4718]: I1210 16:27:26.921586 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/pull/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.090920 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/pull/0.log" Dec 10 
16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.128904 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/util/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.137647 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dk787_792e6faa-53e5-4bb6-baa5-32ce33828b19/extract/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.344139 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/util/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.501667 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/pull/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.524717 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/pull/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.532122 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/util/0.log" Dec 10 16:27:27 crc kubenswrapper[4718]: I1210 16:27:27.741087 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/extract/0.log" Dec 10 16:27:28 crc kubenswrapper[4718]: I1210 16:27:28.579807 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/pull/0.log" Dec 10 16:27:28 crc kubenswrapper[4718]: I1210 16:27:28.652892 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83pdbgb_9350b27a-484f-491b-9a2f-2ae333f3636b/util/0.log" Dec 10 16:27:28 crc kubenswrapper[4718]: I1210 16:27:28.669662 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-utilities/0.log" Dec 10 16:27:28 crc kubenswrapper[4718]: I1210 16:27:28.837785 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-content/0.log" Dec 10 16:27:28 crc kubenswrapper[4718]: I1210 16:27:28.852081 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-utilities/0.log" Dec 10 16:27:28 crc kubenswrapper[4718]: I1210 16:27:28.865567 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-content/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.063786 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-utilities/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.079978 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/extract-content/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.305800 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-utilities/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.630131 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-utilities/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.645852 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-content/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.722812 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-content/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.833059 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5vctd_81ebdd62-3494-4d9a-8d04-ae6122173e69/registry-server/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.935369 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-content/0.log" Dec 10 16:27:29 crc kubenswrapper[4718]: I1210 16:27:29.997985 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/extract-utilities/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.198268 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-f8m8j_fe45ddb0-b6e9-4690-8d2c-dfcef7de0d90/marketplace-operator/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.437927 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-utilities/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.677842 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-content/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.693900 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-utilities/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.778610 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-content/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.983641 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-utilities/0.log" Dec 10 16:27:30 crc kubenswrapper[4718]: I1210 16:27:30.988840 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/extract-content/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.004750 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-dh2hb_ed6de05f-b121-47be-9317-39b153c3012b/registry-server/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.215558 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/extract-utilities/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.257616 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rdjz4_4e412181-46c3-4cde-81c7-92efeeacc196/registry-server/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.454042 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/extract-utilities/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.471235 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/extract-content/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.472660 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/extract-content/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.610676 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/extract-utilities/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.635270 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/extract-content/0.log" Dec 10 16:27:31 crc kubenswrapper[4718]: I1210 16:27:31.729656 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wbxhg_823e77a1-f4fa-452f-bfc5-572228021708/registry-server/0.log" Dec 10 16:27:44 crc kubenswrapper[4718]: I1210 16:27:44.242815 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-czssc_8dd390a0-a978-4de3-ad2e-b76c9f9288ff/prometheus-operator/0.log" Dec 10 16:27:44 crc kubenswrapper[4718]: I1210 16:27:44.392241 4718 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c6579888-795fs_d7da36d4-9aa5-4d1c-8135-f4af9a21dde9/prometheus-operator-admission-webhook/0.log" Dec 10 16:27:44 crc kubenswrapper[4718]: I1210 16:27:44.479405 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85c6579888-mxh5h_dc2f2282-251a-4b32-b59d-8e28aa8e28b1/prometheus-operator-admission-webhook/0.log" Dec 10 16:27:44 crc kubenswrapper[4718]: I1210 16:27:44.579551 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-k2ncb_ecb014f7-37b1-431b-a452-676f723287f4/operator/0.log" Dec 10 16:27:44 crc kubenswrapper[4718]: I1210 16:27:44.677525 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-fxb27_5ae1b9f9-6939-4aa0-8651-a76dafd291a4/perses-operator/0.log" Dec 10 16:28:48 crc kubenswrapper[4718]: I1210 16:28:48.083986 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:28:48 crc kubenswrapper[4718]: I1210 16:28:48.084650 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:29:18 crc kubenswrapper[4718]: I1210 16:29:18.084421 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:29:18 crc kubenswrapper[4718]: I1210 16:29:18.085244 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.084529 4718 patch_prober.go:28] interesting pod/machine-config-daemon-8zmhn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.085221 4718 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.085279 4718 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.086520 4718 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab"} pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.086604 4718 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerName="machine-config-daemon" containerID="cri-o://46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" gracePeriod=600 Dec 10 16:29:48 crc kubenswrapper[4718]: E1210 16:29:48.209754 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.833190 4718 generic.go:334] "Generic (PLEG): container finished" podID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" exitCode=0 Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.833428 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerDied","Data":"46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab"} Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.833731 4718 scope.go:117] "RemoveContainer" containerID="8ed5a435f21adc700b92bc75b1658785da982c7c96614fd189595cd44c41f5bd" Dec 10 16:29:48 crc kubenswrapper[4718]: I1210 16:29:48.835027 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:29:48 crc kubenswrapper[4718]: E1210 16:29:48.836254 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:29:52 crc kubenswrapper[4718]: I1210 16:29:52.879883 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerID="7a739ccb7b795e7c13657c4bc82edd1fc5339ba68393ee4b0a1c0e1a0707e6f4" exitCode=0 Dec 10 16:29:52 crc kubenswrapper[4718]: I1210 16:29:52.879957 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dnscw/must-gather-hdqkd" event={"ID":"4e1a9ba4-970d-4984-aac8-cca52469a498","Type":"ContainerDied","Data":"7a739ccb7b795e7c13657c4bc82edd1fc5339ba68393ee4b0a1c0e1a0707e6f4"} Dec 10 16:29:52 crc kubenswrapper[4718]: I1210 16:29:52.882103 4718 scope.go:117] "RemoveContainer" containerID="7a739ccb7b795e7c13657c4bc82edd1fc5339ba68393ee4b0a1c0e1a0707e6f4" Dec 10 16:29:52 crc kubenswrapper[4718]: I1210 16:29:52.993168 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dnscw_must-gather-hdqkd_4e1a9ba4-970d-4984-aac8-cca52469a498/gather/0.log" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.169003 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp"] Dec 10 16:30:00 crc kubenswrapper[4718]: E1210 16:30:00.170402 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="registry-server" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.170425 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="registry-server" Dec 10 16:30:00 crc kubenswrapper[4718]: E1210 16:30:00.170458 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="extract-content" Dec 10 
16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.170466 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="extract-content" Dec 10 16:30:00 crc kubenswrapper[4718]: E1210 16:30:00.170483 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="extract-utilities" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.170493 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="extract-utilities" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.171608 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbf7f1f-280e-4a0f-978a-5d579dea0c24" containerName="registry-server" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.172850 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.175612 4718 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.176630 4718 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.189101 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp"] Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.250933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzn4t\" (UniqueName: \"kubernetes.io/projected/2b31583a-8903-4bea-acc6-6527b272ba72-kube-api-access-dzn4t\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.250998 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b31583a-8903-4bea-acc6-6527b272ba72-secret-volume\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.251664 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b31583a-8903-4bea-acc6-6527b272ba72-config-volume\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.353955 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzn4t\" (UniqueName: \"kubernetes.io/projected/2b31583a-8903-4bea-acc6-6527b272ba72-kube-api-access-dzn4t\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.354322 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b31583a-8903-4bea-acc6-6527b272ba72-secret-volume\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.355503 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2b31583a-8903-4bea-acc6-6527b272ba72-config-volume\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.356374 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b31583a-8903-4bea-acc6-6527b272ba72-config-volume\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.362130 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b31583a-8903-4bea-acc6-6527b272ba72-secret-volume\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.372820 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzn4t\" (UniqueName: \"kubernetes.io/projected/2b31583a-8903-4bea-acc6-6527b272ba72-kube-api-access-dzn4t\") pod \"collect-profiles-29423070-g4shp\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.508737 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:00 crc kubenswrapper[4718]: I1210 16:30:00.992933 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp"] Dec 10 16:30:01 crc kubenswrapper[4718]: E1210 16:30:01.609279 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b31583a_8903_4bea_acc6_6527b272ba72.slice/crio-conmon-c0ae47460d8c70a08a3dffe10e6565660e67075311248b2223e5aa7b419fc8b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b31583a_8903_4bea_acc6_6527b272ba72.slice/crio-c0ae47460d8c70a08a3dffe10e6565660e67075311248b2223e5aa7b419fc8b2.scope\": RecentStats: unable to find data in memory cache]" Dec 10 16:30:01 crc kubenswrapper[4718]: I1210 16:30:01.995774 4718 generic.go:334] "Generic (PLEG): container finished" podID="2b31583a-8903-4bea-acc6-6527b272ba72" containerID="c0ae47460d8c70a08a3dffe10e6565660e67075311248b2223e5aa7b419fc8b2" exitCode=0 Dec 10 16:30:01 crc kubenswrapper[4718]: I1210 16:30:01.995824 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" event={"ID":"2b31583a-8903-4bea-acc6-6527b272ba72","Type":"ContainerDied","Data":"c0ae47460d8c70a08a3dffe10e6565660e67075311248b2223e5aa7b419fc8b2"} Dec 10 16:30:01 crc kubenswrapper[4718]: I1210 16:30:01.995855 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" event={"ID":"2b31583a-8903-4bea-acc6-6527b272ba72","Type":"ContainerStarted","Data":"2f93c883564fa32d92c502767f051aa4ffa4238f037c6bda00ab5e33a9342021"} Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.370189 4718 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.421356 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b31583a-8903-4bea-acc6-6527b272ba72-secret-volume\") pod \"2b31583a-8903-4bea-acc6-6527b272ba72\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.421554 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b31583a-8903-4bea-acc6-6527b272ba72-config-volume\") pod \"2b31583a-8903-4bea-acc6-6527b272ba72\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.421797 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzn4t\" (UniqueName: \"kubernetes.io/projected/2b31583a-8903-4bea-acc6-6527b272ba72-kube-api-access-dzn4t\") pod \"2b31583a-8903-4bea-acc6-6527b272ba72\" (UID: \"2b31583a-8903-4bea-acc6-6527b272ba72\") " Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.422589 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b31583a-8903-4bea-acc6-6527b272ba72-config-volume" (OuterVolumeSpecName: "config-volume") pod "2b31583a-8903-4bea-acc6-6527b272ba72" (UID: "2b31583a-8903-4bea-acc6-6527b272ba72"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.424334 4718 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b31583a-8903-4bea-acc6-6527b272ba72-config-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.429831 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b31583a-8903-4bea-acc6-6527b272ba72-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2b31583a-8903-4bea-acc6-6527b272ba72" (UID: "2b31583a-8903-4bea-acc6-6527b272ba72"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.430654 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b31583a-8903-4bea-acc6-6527b272ba72-kube-api-access-dzn4t" (OuterVolumeSpecName: "kube-api-access-dzn4t") pod "2b31583a-8903-4bea-acc6-6527b272ba72" (UID: "2b31583a-8903-4bea-acc6-6527b272ba72"). InnerVolumeSpecName "kube-api-access-dzn4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.526254 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzn4t\" (UniqueName: \"kubernetes.io/projected/2b31583a-8903-4bea-acc6-6527b272ba72-kube-api-access-dzn4t\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:03 crc kubenswrapper[4718]: I1210 16:30:03.526301 4718 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2b31583a-8903-4bea-acc6-6527b272ba72-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:04 crc kubenswrapper[4718]: I1210 16:30:04.020858 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:30:04 crc kubenswrapper[4718]: I1210 16:30:04.020871 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" Dec 10 16:30:04 crc kubenswrapper[4718]: E1210 16:30:04.021338 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:30:04 crc kubenswrapper[4718]: I1210 16:30:04.034642 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29423070-g4shp" event={"ID":"2b31583a-8903-4bea-acc6-6527b272ba72","Type":"ContainerDied","Data":"2f93c883564fa32d92c502767f051aa4ffa4238f037c6bda00ab5e33a9342021"} Dec 10 16:30:04 crc kubenswrapper[4718]: I1210 16:30:04.034911 4718 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2f93c883564fa32d92c502767f051aa4ffa4238f037c6bda00ab5e33a9342021" Dec 10 16:30:04 crc kubenswrapper[4718]: I1210 16:30:04.473655 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv"] Dec 10 16:30:04 crc kubenswrapper[4718]: I1210 16:30:04.486207 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29423025-qltcv"] Dec 10 16:30:05 crc kubenswrapper[4718]: I1210 16:30:05.766445 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dnscw/must-gather-hdqkd"] Dec 10 16:30:05 crc kubenswrapper[4718]: I1210 16:30:05.767128 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dnscw/must-gather-hdqkd" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerName="copy" containerID="cri-o://c5df8fde3c3a1c56368f98e6db32663a8a4ac45af347a28e5ddf58657af62273" gracePeriod=2 Dec 10 16:30:05 crc kubenswrapper[4718]: I1210 16:30:05.785631 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dnscw/must-gather-hdqkd"] Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.036562 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7519cf8-6dba-4139-8a89-6d0a5187c5b8" path="/var/lib/kubelet/pods/b7519cf8-6dba-4139-8a89-6d0a5187c5b8/volumes" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.090067 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dnscw_must-gather-hdqkd_4e1a9ba4-970d-4984-aac8-cca52469a498/copy/0.log" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.090455 4718 generic.go:334] "Generic (PLEG): container finished" podID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerID="c5df8fde3c3a1c56368f98e6db32663a8a4ac45af347a28e5ddf58657af62273" exitCode=143 Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.266555 4718 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-must-gather-dnscw_must-gather-hdqkd_4e1a9ba4-970d-4984-aac8-cca52469a498/copy/0.log" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.267243 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dnscw/must-gather-hdqkd" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.393943 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt58h\" (UniqueName: \"kubernetes.io/projected/4e1a9ba4-970d-4984-aac8-cca52469a498-kube-api-access-nt58h\") pod \"4e1a9ba4-970d-4984-aac8-cca52469a498\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.394126 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1a9ba4-970d-4984-aac8-cca52469a498-must-gather-output\") pod \"4e1a9ba4-970d-4984-aac8-cca52469a498\" (UID: \"4e1a9ba4-970d-4984-aac8-cca52469a498\") " Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.424717 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1a9ba4-970d-4984-aac8-cca52469a498-kube-api-access-nt58h" (OuterVolumeSpecName: "kube-api-access-nt58h") pod "4e1a9ba4-970d-4984-aac8-cca52469a498" (UID: "4e1a9ba4-970d-4984-aac8-cca52469a498"). InnerVolumeSpecName "kube-api-access-nt58h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.497278 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt58h\" (UniqueName: \"kubernetes.io/projected/4e1a9ba4-970d-4984-aac8-cca52469a498-kube-api-access-nt58h\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.654617 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1a9ba4-970d-4984-aac8-cca52469a498-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e1a9ba4-970d-4984-aac8-cca52469a498" (UID: "4e1a9ba4-970d-4984-aac8-cca52469a498"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:30:06 crc kubenswrapper[4718]: I1210 16:30:06.701587 4718 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e1a9ba4-970d-4984-aac8-cca52469a498-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:07 crc kubenswrapper[4718]: I1210 16:30:07.102443 4718 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dnscw_must-gather-hdqkd_4e1a9ba4-970d-4984-aac8-cca52469a498/copy/0.log" Dec 10 16:30:07 crc kubenswrapper[4718]: I1210 16:30:07.103077 4718 scope.go:117] "RemoveContainer" containerID="c5df8fde3c3a1c56368f98e6db32663a8a4ac45af347a28e5ddf58657af62273" Dec 10 16:30:07 crc kubenswrapper[4718]: I1210 16:30:07.103189 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dnscw/must-gather-hdqkd" Dec 10 16:30:07 crc kubenswrapper[4718]: I1210 16:30:07.129567 4718 scope.go:117] "RemoveContainer" containerID="7a739ccb7b795e7c13657c4bc82edd1fc5339ba68393ee4b0a1c0e1a0707e6f4" Dec 10 16:30:08 crc kubenswrapper[4718]: I1210 16:30:08.035884 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" path="/var/lib/kubelet/pods/4e1a9ba4-970d-4984-aac8-cca52469a498/volumes" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.105288 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ggscw"] Dec 10 16:30:09 crc kubenswrapper[4718]: E1210 16:30:09.108142 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerName="gather" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.108238 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerName="gather" Dec 10 16:30:09 crc kubenswrapper[4718]: E1210 16:30:09.108349 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerName="copy" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.108457 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerName="copy" Dec 10 16:30:09 crc kubenswrapper[4718]: E1210 16:30:09.108572 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b31583a-8903-4bea-acc6-6527b272ba72" containerName="collect-profiles" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.108670 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b31583a-8903-4bea-acc6-6527b272ba72" containerName="collect-profiles" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.108940 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" 
containerName="gather" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.109037 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1a9ba4-970d-4984-aac8-cca52469a498" containerName="copy" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.109106 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b31583a-8903-4bea-acc6-6527b272ba72" containerName="collect-profiles" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.110793 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.143215 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggscw"] Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.153645 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-utilities\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.153810 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p75x5\" (UniqueName: \"kubernetes.io/projected/c6524535-33da-44ab-b231-034cf194c227-kube-api-access-p75x5\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.153933 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-catalog-content\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " 
pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.256947 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-catalog-content\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.257094 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-utilities\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.257149 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p75x5\" (UniqueName: \"kubernetes.io/projected/c6524535-33da-44ab-b231-034cf194c227-kube-api-access-p75x5\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.257619 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-utilities\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.257694 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-catalog-content\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " 
pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.281337 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p75x5\" (UniqueName: \"kubernetes.io/projected/c6524535-33da-44ab-b231-034cf194c227-kube-api-access-p75x5\") pod \"certified-operators-ggscw\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:09 crc kubenswrapper[4718]: I1210 16:30:09.440441 4718 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:10 crc kubenswrapper[4718]: I1210 16:30:10.054177 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ggscw"] Dec 10 16:30:10 crc kubenswrapper[4718]: I1210 16:30:10.141330 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerStarted","Data":"e3ee44b12210253f5151ceb25a4221950794577dbc5407bbe62bf1832e603858"} Dec 10 16:30:11 crc kubenswrapper[4718]: I1210 16:30:11.172128 4718 generic.go:334] "Generic (PLEG): container finished" podID="c6524535-33da-44ab-b231-034cf194c227" containerID="99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2" exitCode=0 Dec 10 16:30:11 crc kubenswrapper[4718]: I1210 16:30:11.172623 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerDied","Data":"99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2"} Dec 10 16:30:11 crc kubenswrapper[4718]: I1210 16:30:11.181994 4718 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 10 16:30:12 crc kubenswrapper[4718]: I1210 16:30:12.204142 4718 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerStarted","Data":"a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b"} Dec 10 16:30:13 crc kubenswrapper[4718]: I1210 16:30:13.217024 4718 generic.go:334] "Generic (PLEG): container finished" podID="c6524535-33da-44ab-b231-034cf194c227" containerID="a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b" exitCode=0 Dec 10 16:30:13 crc kubenswrapper[4718]: I1210 16:30:13.217089 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerDied","Data":"a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b"} Dec 10 16:30:14 crc kubenswrapper[4718]: I1210 16:30:14.230524 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerStarted","Data":"9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198"} Dec 10 16:30:14 crc kubenswrapper[4718]: I1210 16:30:14.254064 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ggscw" podStartSLOduration=2.76781686 podStartE2EDuration="5.254033476s" podCreationTimestamp="2025-12-10 16:30:09 +0000 UTC" firstStartedPulling="2025-12-10 16:30:11.181565487 +0000 UTC m=+7116.130788914" lastFinishedPulling="2025-12-10 16:30:13.667782113 +0000 UTC m=+7118.617005530" observedRunningTime="2025-12-10 16:30:14.248264392 +0000 UTC m=+7119.197487809" watchObservedRunningTime="2025-12-10 16:30:14.254033476 +0000 UTC m=+7119.203256893" Dec 10 16:30:16 crc kubenswrapper[4718]: I1210 16:30:16.027165 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:30:16 crc kubenswrapper[4718]: E1210 16:30:16.027937 4718 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:30:19 crc kubenswrapper[4718]: I1210 16:30:19.440826 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:19 crc kubenswrapper[4718]: I1210 16:30:19.441419 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:19 crc kubenswrapper[4718]: I1210 16:30:19.513690 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:20 crc kubenswrapper[4718]: I1210 16:30:20.374365 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:20 crc kubenswrapper[4718]: I1210 16:30:20.491017 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggscw"] Dec 10 16:30:22 crc kubenswrapper[4718]: I1210 16:30:22.325673 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ggscw" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="registry-server" containerID="cri-o://9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198" gracePeriod=2 Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.271199 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.329671 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-utilities\") pod \"c6524535-33da-44ab-b231-034cf194c227\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.329862 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p75x5\" (UniqueName: \"kubernetes.io/projected/c6524535-33da-44ab-b231-034cf194c227-kube-api-access-p75x5\") pod \"c6524535-33da-44ab-b231-034cf194c227\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.330014 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-catalog-content\") pod \"c6524535-33da-44ab-b231-034cf194c227\" (UID: \"c6524535-33da-44ab-b231-034cf194c227\") " Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.331956 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-utilities" (OuterVolumeSpecName: "utilities") pod "c6524535-33da-44ab-b231-034cf194c227" (UID: "c6524535-33da-44ab-b231-034cf194c227"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.341352 4718 generic.go:334] "Generic (PLEG): container finished" podID="c6524535-33da-44ab-b231-034cf194c227" containerID="9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198" exitCode=0 Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.341444 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerDied","Data":"9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198"} Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.341670 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ggscw" event={"ID":"c6524535-33da-44ab-b231-034cf194c227","Type":"ContainerDied","Data":"e3ee44b12210253f5151ceb25a4221950794577dbc5407bbe62bf1832e603858"} Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.341727 4718 scope.go:117] "RemoveContainer" containerID="9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.341491 4718 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ggscw" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.346633 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6524535-33da-44ab-b231-034cf194c227-kube-api-access-p75x5" (OuterVolumeSpecName: "kube-api-access-p75x5") pod "c6524535-33da-44ab-b231-034cf194c227" (UID: "c6524535-33da-44ab-b231-034cf194c227"). InnerVolumeSpecName "kube-api-access-p75x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.393889 4718 scope.go:117] "RemoveContainer" containerID="a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.405690 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6524535-33da-44ab-b231-034cf194c227" (UID: "c6524535-33da-44ab-b231-034cf194c227"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.419710 4718 scope.go:117] "RemoveContainer" containerID="99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.431899 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p75x5\" (UniqueName: \"kubernetes.io/projected/c6524535-33da-44ab-b231-034cf194c227-kube-api-access-p75x5\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.431931 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.431942 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6524535-33da-44ab-b231-034cf194c227-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.482010 4718 scope.go:117] "RemoveContainer" containerID="9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198" Dec 10 16:30:23 crc kubenswrapper[4718]: E1210 16:30:23.482761 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198\": container with ID starting with 9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198 not found: ID does not exist" containerID="9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.482804 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198"} err="failed to get container status \"9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198\": rpc error: code = NotFound desc = could not find container \"9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198\": container with ID starting with 9b8a002742ba9c6084d80ee8a3f941c832e200e4463df9543e03ed5bb75d3198 not found: ID does not exist" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.482832 4718 scope.go:117] "RemoveContainer" containerID="a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b" Dec 10 16:30:23 crc kubenswrapper[4718]: E1210 16:30:23.483280 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b\": container with ID starting with a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b not found: ID does not exist" containerID="a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.483340 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b"} err="failed to get container status \"a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b\": rpc error: code = NotFound desc = could not find container 
\"a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b\": container with ID starting with a66fc9715ebd27d07e15b671f5c3875c157d573a74f007d7b71ec5b636fb262b not found: ID does not exist" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.483377 4718 scope.go:117] "RemoveContainer" containerID="99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2" Dec 10 16:30:23 crc kubenswrapper[4718]: E1210 16:30:23.483741 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2\": container with ID starting with 99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2 not found: ID does not exist" containerID="99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.483773 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2"} err="failed to get container status \"99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2\": rpc error: code = NotFound desc = could not find container \"99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2\": container with ID starting with 99acc41dba118ac4eaa7ba4106a01a47a4db9218d5e992524a09c213864506d2 not found: ID does not exist" Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.675772 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ggscw"] Dec 10 16:30:23 crc kubenswrapper[4718]: I1210 16:30:23.712098 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ggscw"] Dec 10 16:30:24 crc kubenswrapper[4718]: I1210 16:30:24.042692 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6524535-33da-44ab-b231-034cf194c227" 
path="/var/lib/kubelet/pods/c6524535-33da-44ab-b231-034cf194c227/volumes" Dec 10 16:30:27 crc kubenswrapper[4718]: I1210 16:30:27.020377 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:30:27 crc kubenswrapper[4718]: E1210 16:30:27.021234 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:30:38 crc kubenswrapper[4718]: I1210 16:30:38.021027 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:30:38 crc kubenswrapper[4718]: E1210 16:30:38.021998 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:30:51 crc kubenswrapper[4718]: I1210 16:30:51.021890 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:30:51 crc kubenswrapper[4718]: E1210 16:30:51.024360 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:30:58 crc kubenswrapper[4718]: I1210 16:30:58.284250 4718 scope.go:117] "RemoveContainer" containerID="7ec0ec4b6c352519952735a35ba5d5b9ba85a66dae84a3ab9c08cdfdc181dd15" Dec 10 16:30:58 crc kubenswrapper[4718]: I1210 16:30:58.310540 4718 scope.go:117] "RemoveContainer" containerID="d0e813890190bb3563842ea854b423deb8eb0f9fbedbeaeb89f7977a370ca408" Dec 10 16:30:58 crc kubenswrapper[4718]: I1210 16:30:58.380863 4718 scope.go:117] "RemoveContainer" containerID="e2ec0ce47d97794172761d845e0704316b60837db39ff21787ed778f8073e06a" Dec 10 16:31:02 crc kubenswrapper[4718]: I1210 16:31:02.020856 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:31:02 crc kubenswrapper[4718]: E1210 16:31:02.021984 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:31:15 crc kubenswrapper[4718]: I1210 16:31:15.020926 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:31:15 crc kubenswrapper[4718]: E1210 16:31:15.021957 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 
10 16:31:29 crc kubenswrapper[4718]: I1210 16:31:29.020878 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:31:29 crc kubenswrapper[4718]: E1210 16:31:29.021979 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:31:44 crc kubenswrapper[4718]: I1210 16:31:44.020998 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:31:44 crc kubenswrapper[4718]: E1210 16:31:44.021943 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.720900 4718 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xz746"] Dec 10 16:31:51 crc kubenswrapper[4718]: E1210 16:31:51.722274 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="extract-content" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.722293 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="extract-content" Dec 10 16:31:51 crc kubenswrapper[4718]: E1210 16:31:51.722330 4718 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="registry-server" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.722338 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="registry-server" Dec 10 16:31:51 crc kubenswrapper[4718]: E1210 16:31:51.722359 4718 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="extract-utilities" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.722368 4718 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="extract-utilities" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.722692 4718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6524535-33da-44ab-b231-034cf194c227" containerName="registry-server" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.724726 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.733998 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz746"] Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.827188 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-catalog-content\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.827761 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-utilities\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.827822 4718 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsgp\" (UniqueName: \"kubernetes.io/projected/01b620f2-2b26-4be9-966c-fad9aa58cd05-kube-api-access-dgsgp\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.929970 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsgp\" (UniqueName: \"kubernetes.io/projected/01b620f2-2b26-4be9-966c-fad9aa58cd05-kube-api-access-dgsgp\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.930120 4718 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-catalog-content\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.930268 4718 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-utilities\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.931028 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-utilities\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.931127 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-catalog-content\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:51 crc kubenswrapper[4718]: I1210 16:31:51.953825 4718 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsgp\" (UniqueName: \"kubernetes.io/projected/01b620f2-2b26-4be9-966c-fad9aa58cd05-kube-api-access-dgsgp\") pod \"redhat-marketplace-xz746\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:52 crc kubenswrapper[4718]: I1210 16:31:52.052562 4718 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:31:52 crc kubenswrapper[4718]: I1210 16:31:52.590068 4718 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz746"] Dec 10 16:31:53 crc kubenswrapper[4718]: I1210 16:31:53.448910 4718 generic.go:334] "Generic (PLEG): container finished" podID="01b620f2-2b26-4be9-966c-fad9aa58cd05" containerID="1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a" exitCode=0 Dec 10 16:31:53 crc kubenswrapper[4718]: I1210 16:31:53.449023 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerDied","Data":"1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a"} Dec 10 16:31:53 crc kubenswrapper[4718]: I1210 16:31:53.449274 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerStarted","Data":"87bcf3fe56d6e77acce91baf965a1c9fb5347504cfff6ddaac12256ca637d96f"} Dec 10 16:31:54 crc kubenswrapper[4718]: I1210 16:31:54.460227 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerStarted","Data":"9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22"} Dec 10 16:31:54 crc kubenswrapper[4718]: E1210 16:31:54.787763 4718 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b620f2_2b26_4be9_966c_fad9aa58cd05.slice/crio-conmon-9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22.scope\": RecentStats: unable to find data in memory cache]" Dec 10 16:31:55 crc kubenswrapper[4718]: I1210 16:31:55.479415 4718 generic.go:334] "Generic (PLEG): 
container finished" podID="01b620f2-2b26-4be9-966c-fad9aa58cd05" containerID="9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22" exitCode=0 Dec 10 16:31:55 crc kubenswrapper[4718]: I1210 16:31:55.479504 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerDied","Data":"9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22"} Dec 10 16:31:56 crc kubenswrapper[4718]: I1210 16:31:56.028336 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:31:56 crc kubenswrapper[4718]: E1210 16:31:56.029628 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:31:56 crc kubenswrapper[4718]: I1210 16:31:56.491371 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerStarted","Data":"0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f"} Dec 10 16:31:56 crc kubenswrapper[4718]: I1210 16:31:56.524066 4718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xz746" podStartSLOduration=3.004309861 podStartE2EDuration="5.524036297s" podCreationTimestamp="2025-12-10 16:31:51 +0000 UTC" firstStartedPulling="2025-12-10 16:31:53.452005049 +0000 UTC m=+7218.401228466" lastFinishedPulling="2025-12-10 16:31:55.971731485 +0000 UTC m=+7220.920954902" observedRunningTime="2025-12-10 16:31:56.517624166 +0000 UTC 
m=+7221.466847583" watchObservedRunningTime="2025-12-10 16:31:56.524036297 +0000 UTC m=+7221.473259704" Dec 10 16:32:02 crc kubenswrapper[4718]: I1210 16:32:02.053256 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:32:02 crc kubenswrapper[4718]: I1210 16:32:02.054233 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:32:02 crc kubenswrapper[4718]: I1210 16:32:02.121363 4718 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:32:02 crc kubenswrapper[4718]: I1210 16:32:02.605107 4718 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:32:02 crc kubenswrapper[4718]: I1210 16:32:02.659673 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz746"] Dec 10 16:32:04 crc kubenswrapper[4718]: I1210 16:32:04.571460 4718 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xz746" podUID="01b620f2-2b26-4be9-966c-fad9aa58cd05" containerName="registry-server" containerID="cri-o://0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f" gracePeriod=2 Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.040954 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.118668 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-utilities\") pod \"01b620f2-2b26-4be9-966c-fad9aa58cd05\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.118809 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-catalog-content\") pod \"01b620f2-2b26-4be9-966c-fad9aa58cd05\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.118886 4718 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgsgp\" (UniqueName: \"kubernetes.io/projected/01b620f2-2b26-4be9-966c-fad9aa58cd05-kube-api-access-dgsgp\") pod \"01b620f2-2b26-4be9-966c-fad9aa58cd05\" (UID: \"01b620f2-2b26-4be9-966c-fad9aa58cd05\") " Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.120251 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-utilities" (OuterVolumeSpecName: "utilities") pod "01b620f2-2b26-4be9-966c-fad9aa58cd05" (UID: "01b620f2-2b26-4be9-966c-fad9aa58cd05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.128742 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b620f2-2b26-4be9-966c-fad9aa58cd05-kube-api-access-dgsgp" (OuterVolumeSpecName: "kube-api-access-dgsgp") pod "01b620f2-2b26-4be9-966c-fad9aa58cd05" (UID: "01b620f2-2b26-4be9-966c-fad9aa58cd05"). InnerVolumeSpecName "kube-api-access-dgsgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.151342 4718 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01b620f2-2b26-4be9-966c-fad9aa58cd05" (UID: "01b620f2-2b26-4be9-966c-fad9aa58cd05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.220747 4718 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgsgp\" (UniqueName: \"kubernetes.io/projected/01b620f2-2b26-4be9-966c-fad9aa58cd05-kube-api-access-dgsgp\") on node \"crc\" DevicePath \"\"" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.220788 4718 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-utilities\") on node \"crc\" DevicePath \"\"" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.220802 4718 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b620f2-2b26-4be9-966c-fad9aa58cd05-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.586014 4718 generic.go:334] "Generic (PLEG): container finished" podID="01b620f2-2b26-4be9-966c-fad9aa58cd05" containerID="0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f" exitCode=0 Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.586097 4718 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xz746" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.586097 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerDied","Data":"0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f"} Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.586200 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xz746" event={"ID":"01b620f2-2b26-4be9-966c-fad9aa58cd05","Type":"ContainerDied","Data":"87bcf3fe56d6e77acce91baf965a1c9fb5347504cfff6ddaac12256ca637d96f"} Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.586233 4718 scope.go:117] "RemoveContainer" containerID="0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.620200 4718 scope.go:117] "RemoveContainer" containerID="9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.635568 4718 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz746"] Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.643417 4718 scope.go:117] "RemoveContainer" containerID="1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.653540 4718 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xz746"] Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.692874 4718 scope.go:117] "RemoveContainer" containerID="0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f" Dec 10 16:32:05 crc kubenswrapper[4718]: E1210 16:32:05.693328 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f\": container with ID starting with 0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f not found: ID does not exist" containerID="0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.693370 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f"} err="failed to get container status \"0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f\": rpc error: code = NotFound desc = could not find container \"0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f\": container with ID starting with 0829692c065ad1f65a44871641f50a9d20df0a20c64aded5bd900e1f30a4d74f not found: ID does not exist" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.693409 4718 scope.go:117] "RemoveContainer" containerID="9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22" Dec 10 16:32:05 crc kubenswrapper[4718]: E1210 16:32:05.693623 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22\": container with ID starting with 9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22 not found: ID does not exist" containerID="9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.693655 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22"} err="failed to get container status \"9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22\": rpc error: code = NotFound desc = could not find container \"9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22\": container with ID 
starting with 9eb242cac011fbcb8dd9b439cf1208002ca86e4ebfa418a0f325351cccaaea22 not found: ID does not exist" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.693673 4718 scope.go:117] "RemoveContainer" containerID="1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a" Dec 10 16:32:05 crc kubenswrapper[4718]: E1210 16:32:05.693957 4718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a\": container with ID starting with 1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a not found: ID does not exist" containerID="1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a" Dec 10 16:32:05 crc kubenswrapper[4718]: I1210 16:32:05.693984 4718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a"} err="failed to get container status \"1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a\": rpc error: code = NotFound desc = could not find container \"1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a\": container with ID starting with 1ef5f8125473d76cd070fc089b14ed7fc57d0e2c0a8981c498f9c2d8c023201a not found: ID does not exist" Dec 10 16:32:06 crc kubenswrapper[4718]: I1210 16:32:06.041118 4718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b620f2-2b26-4be9-966c-fad9aa58cd05" path="/var/lib/kubelet/pods/01b620f2-2b26-4be9-966c-fad9aa58cd05/volumes" Dec 10 16:32:10 crc kubenswrapper[4718]: I1210 16:32:10.020570 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:32:10 crc kubenswrapper[4718]: E1210 16:32:10.021367 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:32:25 crc kubenswrapper[4718]: I1210 16:32:25.021152 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:32:25 crc kubenswrapper[4718]: E1210 16:32:25.022511 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:32:39 crc kubenswrapper[4718]: I1210 16:32:39.020827 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:32:39 crc kubenswrapper[4718]: E1210 16:32:39.021877 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:32:52 crc kubenswrapper[4718]: I1210 16:32:52.020807 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:32:52 crc kubenswrapper[4718]: E1210 16:32:52.022810 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:33:04 crc kubenswrapper[4718]: I1210 16:33:04.020268 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:33:04 crc kubenswrapper[4718]: E1210 16:33:04.021305 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:33:17 crc kubenswrapper[4718]: I1210 16:33:17.020572 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:33:17 crc kubenswrapper[4718]: E1210 16:33:17.021970 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:33:32 crc kubenswrapper[4718]: I1210 16:33:32.021076 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:33:32 crc kubenswrapper[4718]: E1210 16:33:32.022249 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:33:47 crc kubenswrapper[4718]: I1210 16:33:47.020948 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:33:47 crc kubenswrapper[4718]: E1210 16:33:47.023615 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:34:01 crc kubenswrapper[4718]: I1210 16:34:01.021237 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:34:01 crc kubenswrapper[4718]: E1210 16:34:01.022289 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:34:15 crc kubenswrapper[4718]: I1210 16:34:15.021429 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:34:15 crc kubenswrapper[4718]: E1210 16:34:15.022873 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:34:28 crc kubenswrapper[4718]: I1210 16:34:28.021338 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:34:28 crc kubenswrapper[4718]: E1210 16:34:28.022501 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:34:42 crc kubenswrapper[4718]: I1210 16:34:42.020894 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:34:42 crc kubenswrapper[4718]: E1210 16:34:42.021955 4718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8zmhn_openshift-machine-config-operator(8db53917-7cfb-496d-b8a0-5cc68f3be4e7)\"" pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" podUID="8db53917-7cfb-496d-b8a0-5cc68f3be4e7" Dec 10 16:34:53 crc kubenswrapper[4718]: I1210 16:34:53.021607 4718 scope.go:117] "RemoveContainer" containerID="46e05a5a0b93b5b168c9b125f8be5d0a31de5bd6c874a8392e9682fe57e1a5ab" Dec 10 16:34:53 crc kubenswrapper[4718]: I1210 16:34:53.589821 4718 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8zmhn" event={"ID":"8db53917-7cfb-496d-b8a0-5cc68f3be4e7","Type":"ContainerStarted","Data":"8f0e0165ff08a5d1cf19aed190d28a6b00050a9bfcd7f53931c36564ff0af16a"}